Tag Archives: semtech2008

SemTech 2008: Eric Miller (Zepheira) – "Reuse, Repurpose, Remix"

Eric Miller from Zepheira gave the second keynote talk yesterday, talking about some of their open-source development activities that have reached a level that he thought we might be interested in. The aim of the talk was to show that it is possible to reduce the costs for people who are interested in mixing together data from lots of different sources while hiding a lot of the complexity that makes that happen.

He began with a story about when his dad was in hospital with cancer: it was a comedy of errors (“many errors with not much comedy”). As he went from one department to another, they couldn’t correlate any information because their patient care model had no primary key to aid with the combination of that information. Talking to the doctors and others in this space, Eric realised how alarmingly frequent this is. Common statements were “the systems weren’t designed to do that”, “we can’t do that”, etc., resulting in general frustration. That pattern in the hospital is repeated across various businesses and organisations. Eric said that there are too many important things that we as a community need right now, so we need a useful reusable infrastructure to solve various problems, and one way is to use the Web. We can bring lessons learned from the Web back into these organisations.

He then moved on to talk about some of the things we can do to make the required bridges stronger. There’s a common theme (when talking to different people and groups in health, climate change, etc.) of a requirement for such bridging technologies. A lot of the solutions exist, so we just have to stick the parts of the answer together. If we could figure out how to connect these together, then we can have a serious jump on the problem(s). Lessons from the Web (and the Semantic Web) can be applicable to managing information from these enterprise or organisational spaces.

He talked about a document analogy. A big change on the Web from several years ago was the blog. Before then, the so-called Read/Write Web had a disproportionate amount of the “read” aspect to it. People began adding little bits of structure to the creation of content in blogs. We can take advantage of likeness factors or patterns in communities (of bloggers): it’s a very powerful aspect. This little bit of structure can feed into larger communities, e.g. Technorati leverages the structure from multiple blogs.

He then talked about a music analogy. Sid Vicious did Sinatra’s “My Way”. Apple’s GarageBand reduced the technical barriers for people to reuse lyrics and music, allowing people to get more creative about how they could use each other’s data. Recently, NIN made their multi-track files available for remixing. Just as in the document analogy, this is adding more structure to the content which allows people to take this and do more with it. This also takes advantage of the network effect, by leveraging multiple community contributions across available repurposable data (not just for one song or one individual). As a result, we get services like MusicBrainz where we can also see patterns around music.

In this way, we can stop worrying so much about whether it is a spreadsheet, a database, whatever. [These are all just parts that can be brought together, and you don’t have to settle on a particular format or storage mechanism to progress.]

From an action standpoint, Eric said that this corresponds to: create, publish, and analyse. For documents, the corresponding action stream is from creating a blog text to publishing on the Blogger website to mass analyses via Technorati. For music, this could be from creating a song in GarageBand to publishing via iTunes to analysis in MusicBrainz. Finally, for data, Eric will show us this process using Exhibit, Remix and Studio.

He gave a demo of Exhibit from MIT SIMILE. Exhibit is a software service for rendering data. You ship data to it and you get back a faceted navigation system. You don’t need to install a database, and you don’t have to create a business logic tier. You can style it in different ways, and look at it in different “lenses”.
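To give a flavour of how little is needed: Exhibit consumes a plain JSON file of items and turns the item properties into facets, all in the browser. The records and field names below are invented for illustration; only the overall `items` shape reflects Exhibit’s data format.

```python
import json

# A minimal, hypothetical data file in the JSON shape Exhibit consumes:
# a flat list of items, each with a label and arbitrary properties that
# Exhibit can expose as facets (the field names here are invented).
exhibit_data = {
    "items": [
        {"label": "Clinic A report", "type": "Report", "clinic": "A", "year": "2008"},
        {"label": "Clinic B report", "type": "Report", "clinic": "B", "year": "2007"},
    ]
}

# Exhibit itself runs in the browser: an HTML page links to this JSON file
# and declares facets and views declaratively, with no server tier at all.
print(json.dumps(exhibit_data, indent=2))
```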

Remix is a tool that builds on top of this. Eric is one of the PIs of the project. It ties together best-agreed components – visual interfaces, data transformation interfaces, data storage, etc. – all of this is brought together under the Remix umbrella. Eric also mentioned that Remix leverages persistent identifiers using purlz.org. These can be for people, places, concepts, network objects, anything.

He presented an example of data that an oncology nurse or doctor uses frequently, which is not in an ontology: some of it is in their head and the rest is in a spreadsheet. He showed Remix stitching together two spreadsheets from different clinics for oncology. You can stitch together fields and see if it makes sense from a data perspective. Remix has some tools for “simultaneous editing” which allows editing over patterns of data, so by editing one entry you can edit all of them. This acts like a script which can change “lastname, firstname” to “firstname lastname” without any complicated programming. You can connect anything, but it may not necessarily make sense, so there’s a need for interfaces to show users if it does make sense. Then in Exhibit, you can customise facets, views, apply different themes, etc. Within a matter of minutes, Remix gives tools that a nurse can use to not just create an interface but to publish the information to the Web so that other people can benefit from it.
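The “simultaneous editing” idea can be sketched as follows: one edit is expressed as a pattern and then replayed over every matching record. The sample names are made up, and this is only an illustration of the concept, not Remix’s actual implementation.

```python
# Sketch of pattern-based "simultaneous editing": the user edits one entry
# through the interface, and the same transformation is applied to every
# record in the column (sample data invented for illustration).
def flip_name(name: str) -> str:
    """Turn 'lastname, firstname' into 'firstname lastname'."""
    last, _, first = name.partition(",")
    return f"{first.strip()} {last.strip()}" if first else name

patients = ["Smith, Alice", "Jones, Bob", "Lee, Carol"]
print([flip_name(n) for n in patients])
# -> ['Alice Smith', 'Bob Jones', 'Carol Lee']
```

The point of the interface is that the user never writes this function: they demonstrate the edit once and the pattern is generalised for them.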

Every bit of the transformation that has occurred here has been identified (with an identifier). Everything has become a web resource, with a framework that enables people to stitch stuff together in a resource-oriented architecture. Then this can be analysed using Studio. If Technorati provides real-time analysis of RSS feeds, Studio provides an analysis of your company or organisational data, e.g. as reports with pattern analysis. Because it’s based on RDF / SPARQL, you can create queries that are relevant to you: “show me all the most popular or least popular reports”, or “show me any reports that used some of my data”.
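In a real deployment those questions would be SPARQL queries against an RDF store; this pure-Python sketch over invented report records just illustrates the kind of question (“the most popular reports that used some of my data”), not Studio’s actual query interface.

```python
# Invented report records standing in for RDF statements; in Studio these
# questions would be asked with SPARQL rather than list comprehensions.
reports = [
    {"title": "Q1 oncology summary", "views": 120, "sources": {"clinic-a.xls"}},
    {"title": "Referral trends",     "views": 45,  "sources": {"clinic-b.xls"}},
    {"title": "Combined outcomes",   "views": 200, "sources": {"clinic-a.xls", "clinic-b.xls"}},
]

my_data = "clinic-a.xls"

# "show me any reports that used some of my data"
used_mine = [r for r in reports if my_data in r["sources"]]

# "show me the most popular of those"
most_popular = max(used_mine, key=lambda r: r["views"])
print(most_popular["title"])
```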

This can bring organisations into a “Linked Enterprise Data” (LED) framework. Some people may not care so much about Linked Open Data (LOD): “expose your data, and something cool is going to happen”. Rather, Eric talked about exposing your enterprise data and showing that something is going to happen right now, so that you can see the benefits in terms of solutions available immediately. LED is a big part of what they’ve been focussing on in Zepheira.

The key subtext is recognising that what we’re dealing with is hospitals, organisations etc., who can leverage lots of the standards and solutions that we’ve been using on the Web but at a larger scale. Tools like this are a critical aspect of what companies can take now and can start to use to link their data together.

Eric said that there are huge advantages for companies to not just be “on” the Web but to be “in” the Web. If employees are a company’s most important aspect, why tie their hands behind their backs and ask them to solve a particular problem without providing them with the means to do it? There’s a need to empower them, to make it easier for them to get at data, to integrate it and to share it. There are just too many problems not to address / attack them aggressively through not just one approach or representation, but by stitching various parts together.

Eric finished by challenging ten companies to try out these tools if they haven’t before, to come back to SemTech 2009 with reports, and to share each other’s knowledge. The standards and tools are robust, so it can be done.

SemTech 2008: Nova Spivack (Radar Networks) – "Experience from the Cutting Edge of the Semantic Market"

Nova Spivack of Radar Networks gave a keynote talk at the 2008 Semantic Technologies Conference this morning.

He started off by giving some background to Twine. Twine is a service that lets you share what you know. When Nova pitched the original idea for the underlying platform to VCs in 2003, he was told that it was a technology in search of a problem. Thanks to DARPA and SRI, Nova had carried out some research in this field for a few years. The initial proposal to VCs was to develop next-generation personal assistants based on the Semantic Web. After the initial knock-back, Nova went out again to raise funding, and Paul Allen stepped in as the first outside angel with Vulcan Capital.

Radar started working on the first commercial version of the underlying platform and also began work on the Twine application. The platform underneath Twine is not something they’ve talked about much so far, and they will discuss it (not at this conference) in the Fall. Radar also want to allow non-Semantic Web savvy people to build applications that use the Semantic Web without doing any programming.

Twine was announced last October at the Web 2.0 Summit. They began the invite-only beta soon after that. The focus of Twine is interests. It’s a different type of social network. Facebook is often used for managing your relationships, LinkedIn for your career, and Twine is for your interests. He called it “interest networking” as opposed to social networking.

With Twine, you can share knowledge, track interests with feeds, carry out information management in groups or communities, build or participate in communities around your interests, and collaborate with others. The key activities are organise, share and discover.

Twine allows you to find things that might be of interest to you based on what you are doing. The key “secret sauce” is that everything in Twine is generated from an ontology. The entire site – user interface elements, sidebar, navbar, buttons, etc. – come from an application ontology.

Similarly, the data is modelled on an ontology. Twine isn’t limited to these ontologies. Radar are beginning the process of bringing in other ontologies and using them in Twine. Later, they will allow people to make their own ontologies (e.g. to express domain specific stuff). In the long run, the community infrastructure will allow people to have a more extensible infrastructure.
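The idea that “the entire site comes from an application ontology” can be sketched roughly like this: given a class description, interface elements are derived rather than hand-coded, so swapping in a new ontology changes the application. Twine’s actual ontology is not public, so every name below is invented purely to illustrate the pattern.

```python
# A hypothetical ontology class description (all names invented); the UI is
# generated from it instead of being hard-wired in templates.
bookmark_class = {
    "label": "Bookmark",
    "properties": [
        {"name": "title",   "range": "string"},
        {"name": "url",     "range": "uri"},
        {"name": "company", "range": "Company"},
    ],
}

def form_fields(cls: dict) -> list[str]:
    """Derive one input element per ontology property."""
    return [f"<input name='{p['name']}' data-range='{p['range']}'>"
            for p in cls["properties"]]

for field in form_fields(bookmark_class):
    print(field)
```

Adding a property to the ontology would then add a field to every form that renders the class, with no template changes.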

Twine does natural language processing on text, mainly providing auto tagging with semantic capabilities. It has an underlying ontology with a million instances of thousands of concepts to generate these tags (right now, they are exposing just some of these). Radar are also looking at statistical analyses or clustering of related content, more of which we will see in the Fall (mainly, which people, items and interests are related to each other). For example, “here are bunch of things that are all about movies you like”. Twine uses machine learning to create these clusters.
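At its simplest, auto-tagging against a large instance base can be pictured as dictionary matching: scan the text for phrases that name known instances and emit the matching concepts. The lexicon entries below are invented, and Twine’s real pipeline layers NLP and statistical disambiguation on top of anything this naive.

```python
# A toy, dictionary-based auto-tagger. The phrase-to-concept entries are
# invented; a real system would match against roughly a million instances
# and disambiguate with context.
lexicon = {
    "san jose": "City/SanJose",
    "semantic web": "Topic/SemanticWeb",
    "garageband": "Product/GarageBand",
}

def auto_tag(text: str) -> list[str]:
    """Return the concepts whose names appear in the text."""
    lowered = text.lower()
    return [concept for phrase, concept in lexicon.items() if phrase in lowered]

print(auto_tag("Notes on the Semantic Web conference in San Jose"))
```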

Twine search also has semantic capabilities. You can filter bookmarks by the companies they are related to, or filter people by the places they are from. Underneath Twine, they have also done a lot of work on scaling.

Consumer prime-time launch of Twine is slated for the Fall. A good few bugs still have to be addressed, but Nova says there has been a “wonderful flowering of participation and friendships” in Twine. Many networks of like-minded people with common interests are being formed, and it is very interesting to see this take place. Nova himself has 500 contacts in Twine, and just 300 in Facebook. He now uses it as his main news source. David Lewis (the top Twiner, who is also at the conference) has nearly 1500 contacts in Twine.

Twine wants to bring semantics to the masses, and is not just aiming at Semantic Web researchers: it has to be mainstream. The main common thread in feedback received is that the interface needs to be simplified more. (Nova says he shaved his head as part of this new simpler interface :-)) Someone who knows nothing about structured data or auto tagging should be able to figure out in a few minutes or even seconds how to use it. It takes a few days at the moment to get a sense of the value, but Nova says it can be very addictive when you get into it.

Individuals are the first market, even if you are on your own and don’t have any friends 🙂 It is even more valuable if you are connected to other people and if you join groups, giving a richer network effect. The main value proposition is that you can keep track of things you like and people you know, and capture knowledge you think is important.

Motley Fool recently talked about Google killers. Twine is not one, according to Nova, as it is not trying to index the entire Web. Twine is about the information that you think is important, not everything available. Twine also pulls in related things (e.g. from links in an e-mail), capturing information around the information that you bring in.

When groups start using Twine, collective intelligence starts to take place (by leveraging other people who are researching stuff, finding things, testing, commenting, etc.). It’s a type of communal knowledge base similar to other things like Wikia or Freebase. However, unlike many public communal sites, in Twine more than half of the data and activities are private (60%). Therefore privacy and permission control is very important, and it goes deep into the Twine data.

Initially Radar had their own triple store, an LGPL one from the CALO project. They found that it didn’t scale towards web-scale applications, and it didn’t have the levels of transaction control you’d need from an enterprise application. They decided to go for a SQL database (PostgreSQL) with WebDAV. However, relational databases weren’t optimised for the “shape” of data that they were putting into it, so it needed to be tweaked. They’ve had no performance issues so far, but they may move to a federated model next year. Twine uses an eight-element tuple store (subject-predicate-object, provenance, time stamp, confidence value, and other statistics about the triple or item itself). They can do predicate inferencing across statements, access control, etc. The platform is all written in Java, and Twine then sits on top of that.
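The eight-element tuple store can be sketched as a record type. The talk names six of the eight fields (subject, predicate, object, provenance, timestamp, confidence); the last two are described only as “other statistics about the triple or item itself”, so the two trailing field names below are guesses for illustration. The platform is written in Java, but this sketch follows the Python used elsewhere on this page.

```python
from typing import NamedTuple

# One statement in a hypothetical eight-element tuple store. The first six
# fields are from the talk; stat_a/stat_b stand in for the unspecified
# "other statistics" fields.
class Statement(NamedTuple):
    subject: str
    predicate: str
    object: str
    provenance: str    # who or what asserted this statement
    timestamp: float   # when it was asserted
    confidence: float  # how much the system trusts it
    stat_a: int = 0    # hypothetical statistics field
    stat_b: int = 0    # hypothetical statistics field

s = Statement("twine:nova", "foaf:knows", "twine:david",
              provenance="import:addressbook",
              timestamp=1213000000.0, confidence=0.9)
print(len(s))  # eight elements per statement
```

Keeping provenance and confidence alongside each triple is what lets the store do access control and per-statement trust without reifying triples into extra statements.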

Next he talked about the Twine beta status. There have been 20,000 beta testers in the last 30 days, 9,000 twines created, and 150,000 items added; 60% of twines are private, and new features are being added every four weeks (in point releases). Some of the feature requests they’ve received include import capabilities, interoperability with other apps, and the ability to use other ontologies.

Twine will stay in invite-only beta for the summer. Soon, they will take off the password door to the public twines, so that they will all be visible to search engines. Radar will be SEO-ing the content automatically, so you will see more “walk-ins” after that happens. They will still be able to control who gets an account, but stuff will be publicly accessible.

In the Fall, Radar will open it so that anyone can open an account. You will be able to really customise Twine, to author and develop rich semantic content. Nova says that Twine will then be a step beyond blogs and wikis when it happens (but he can’t say much about the new stuff for now).

Next, there were some questions.

Q: The first one was about privacy. What if you add something and then later you decide that you want to delete it – is it really deleted or does Twine keep it around?

A: Nova answered that currently, it is not really deleted, it goes into a non-visible triple. But they will be doing that (really deleting it) soon.

Q: What is the approach to interoperability with Twine? What other types of semantic applications will Twine work with?

A: Today, Twine works with e-mail (in / out), RSS (get feeds out), and browsers (e.g. for bookmarking). There have been lots of requests for interoperability with mindmaps, various databases, enterprise applications, etc., so Radar are giving it a lot of thought. Twine has to provide APIs. They have a REST and a SPARQL API: they are not fully ready just yet, but by the end of the year Twine will have a usable REST API. Unfortunately, Radar can’t handle the long tail of feature requests (there are just too many), but an API will help people to make their own add-ons.

Then there’s the ontology level. You will be able to get the data about you or related to you out of Twine in RDF. You should also be able to get stuff out using other ontologies that are common, e.g. using FOAF, SIOC (yay!), or Dublin Core.
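To make the “get your data out in common ontologies” idea concrete, here is a hand-rolled sketch that serialises a user record as FOAF in Turtle. The account data is invented and a real exporter would use an RDF library, but the vocabulary terms (`foaf:Person`, `foaf:name`, `foaf:knows`) are the standard FOAF ones.

```python
# Serialise an invented user record as FOAF/Turtle by hand, purely to show
# what "data about you, out of Twine, in RDF" could look like.
def to_foaf_turtle(user: dict) -> str:
    lines = [
        "@prefix foaf: <http://xmlns.com/foaf/0.1/> .",
        f"<{user['uri']}> a foaf:Person ;",
        f"    foaf:name \"{user['name']}\" ;",
    ]
    lines += [f"    foaf:knows <{k}> ;" for k in user["knows"]]
    # Turtle ends the final property of a subject with '.' instead of ';'
    lines[-1] = lines[-1][:-1] + "."
    return "\n".join(lines)

print(to_foaf_turtle({
    "uri": "http://twine.example/user/nova",
    "name": "Nova Spivack",
    "knows": ["http://twine.example/user/david"],
}))
```

Because FOAF, SIOC and Dublin Core are shared vocabularies, any consumer that understands them can use this export without knowing anything about Twine’s internal ontology.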

They are also looking at specific adaptors that they need to build. For example, this includes importers for del.icio.us, Digg, desktop bookmark files, Outlook contacts, and a bunch of others. They will be rolling out some of these in the Fall timeframe. There may also be demand for Lotus Notes or Exchange interoperability. Radar may actually look first at other semantic applications, like Freebase, that they could interoperate with. They have already hardcoded in some interoperability with Amazon, for example.

Q: When Radar went to VCs and were turned down, was Twine part of the pitch? (For the second time around with Paul Allen, the questioner presumed that Nova did have it as part of the pitch.)

A: In 2003, Radar had a desktop-based semantic tool called “Personal Radar”. It was basically a Java-based P2P “Twine” using RDF. It had lots of eye candy and visualisations. The VCs said “semantic what?” and it was extremely hard to explain P2P, Semantic Web, RDF, and knowledge sharing to them. He said the VCs are mainly interested in when you are going to make money for them. But most of his pitch was blue sky, with no business plan, demonstrating a piece of technology, and pushing the fact that he knows people will need it. Paul Allen was more visionary, and he really believes adding structure to the Web is inevitable. He was willing to take a bet before they were in business. Then they went on to get Series A funding. The VCs said it was too early, but they eventually got it. Series B wasn’t as hard, and it fell into place in a matter of weeks, so it was a good round.

Even though there’s a lot of talk about the Semantic Web in the press and on the Web, most VCs are still figuring it out now and they are interested in making just one bet in the space. The main thing you need to avoid is being a platform without having any applications to show. It has to be compelling, where you can envisage users using them. Valley VCs are jaded about platforms.

Q: As one imports information from various places, what exactly is there in Twine that will prevent a person having to merge any duplicate objects?

A: Nova said there is limited duplication detection at the moment, but this will be improved in a few months. Most people submit similar bookmarks and it is reasonably straightforward to identify these, e.g. when the same item is arrived at through different paths on a website and has different URLs.
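The bookmark case Nova describes (the same item reached through different paths, with different URLs) is often tackled by normalising URLs before comparing them. The rules below are generic examples of such normalisation, not Twine’s actual heuristics.

```python
from urllib.parse import urlsplit, urlunsplit

# Generic URL canonicalisation for duplicate detection: lower-case the
# host, drop "www.", trailing slashes, query strings and fragments.
# These rules are illustrative, not Twine's.
def canonical(url: str) -> str:
    scheme, netloc, path, query, _frag = urlsplit(url)
    netloc = netloc.lower().removeprefix("www.")
    path = path.rstrip("/") or "/"
    return urlunsplit((scheme, netloc, path, "", ""))

a = "http://www.example.com/article/42/?ref=home"
b = "http://example.com/article/42#comments"
print(canonical(a) == canonical(b))  # the two bookmarks collapse to one
```

Dropping the query string is deliberately aggressive here; a production system would keep queries that actually select content and strip only tracking parameters.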

Q: Ivan Herman from the W3C asked if Radar were considering leveraging the linked open data community?

A: Nova said that DBpedia would be one of those main sources of data that they want to integrate with – the FOAF-scape, the SIOC-o-sphere, and DBpedia. Wikipedia URIs are already being used to identify tags, and this is something they will leverage.

Q: How can copyright be managed in Twine?

A: Nova said that it’s thanks to the Digital Millennium Copyright Act (DMCA). It provides a safe harbour if you cannot reasonably prevent anything and everything from being uploaded (and are unaware of it). Twine’s user agreement says please do not add other people’s copyright material. Fair use is okay, and if you share something copyrighted, it is better to have a blurb with a link to the main content. Therefore, Twine is using the same procedure as other UGC sites.

Q: How are Radar going to make money?

A: Twine is focused on advertising as the first revenue stream. Twine has a semantic profile of users and groups, so it can understand their interests very well. Twine will start to show sponsored content or ads in Twine based on these interests. If something is extremely relevant to your interests, then it is almost like content (even if it is sponsored). They will be pilot testing this advertising soon.

Q: Have Radar been approached by Google, Facebook, as the value proposition for Twine is very interesting?

A: Nova said they are not trying to compete with Facebook (right now!), but rather they are trying to find the magic formula that will work for Twine right now. Facebook has a lot of fluffy stuff: vampires, weird games, etc. Nova said he’d prefer to spin the bottle with a real person. Twine will focus on professional people who have a stronger need for a particular interest, doing things technically that are outside the scope of what they are doing at the moment.

Q: Why does Twine use tuple storage: why is it not using a quad?

A: Nova said it’s faster in their system, so for performance reasons they decided to avoid reification.

(I will also post my notes from Eric Miller’s keynote in the next day or three.)

SemTech sessions related to data portability / IEEE Computing article on portable data

It’s been a busy few weeks for DataPortability.org with announcements from many sides including Google (Friend Connect), Facebook (Connect) and MySpace (Data Availability). Next week, the Semantic Technologies Conference will be held in San Jose, California, and you can bet that discussions around the need for portable data will be scattered throughout.

  • On Monday, Stefan, Uldis and I will present a tutorial (which will also cover data portability aspects of ontologies such as SIOC and FOAF) entitled “The Future of Social Networks: The Need for Semantics“.
  • On Monday evening at 8 PM, there will be an informal meetup of some DataPortability.org people in the Fairmont Hotel’s Lobby Lounge, so if you have an interest in data portability, feel free to join us.
  • On Tuesday at 7:15 AM, I will chair a “Data Portability Interest Group” meeting. Attendees will include Chris Saad, Daniela Barbosa, Henry Story, and yours truly.
  • Then on Tuesday afternoon at 2:00 PM, Jim Benedetto, Senior Vice President of Technology with MySpace will talk about “Data Availability at MySpace“.

Last month, IEEE Computing published an article by Karen Heyman entitled “The Move to Make Social Data Portable“. I was interviewed for the piece along with Michael Pick (social media expert), Duncan Riley (b5media), John McCrea (Plaxo), Craig Knoblock (ISI), Chris Saad (DataPortability.org), Dave Treadwell (Microsoft), Kevin Marks (Google), Chris Kelly (Facebook), Marc Canter (Broadband Mechanics), and Bill Washburn (OpenID). Technology solutions mentioned included RSS, OpenID, OAuth, microformats, RDF, APML, SIOC and FOAF. Here are my original answers to Karen’s questions.
