Slides from the SIOC tutorial at WWW2008

Here are the PowerPoint slides from our tutorial on “Interlinking Online Communities and Enriching Social Software with the Semantic Web” at the World Wide Web Conference in Beijing – you can also download them from here:

The tutorial went well: the room was hot and we were a bit jetlagged, but we had some good feedback afterwards, and about 30 people attended in all.

I had a nice few days in Beijing: I participated in the W3C Advisory Committee meeting on Sunday, Monday and Tuesday, gave our SIOC tutorial with Alex and Uldis on Monday afternoon, popped along to our paper at the Linked Data on the Web workshop on Tuesday, and attended some sessions on Wednesday (Kai-Fu Lee’s plenary keynote on Cloud Computing, the discussion panel with Lada Adamic et al. on the Future of Online Social Interactions, the W3C Open Your Data! track, and a packed session on Social Networks: Discovery and Evolution of Communities). On Thursday, I gave a talk about DERI at Tsinghua University to Cemon Yang and his team at the Digital Government / Web and Software Research Centre. On Thursday evening we had the banquet in the Great Hall of the People, and I headed back to Ireland on Friday.

Unfortunately I saw little of Beijing outside of travelling between venues in taxis and buses, so I have a good reason to return and see / do more next time…

WWW2008 Beijing: Dr. Kai-Fu Lee (Google) – "Cloud Computing"

Kai-Fu Lee is Vice President of Engineering at Google, and President of Google Greater China. He joined Google in 2005; earlier in his career he developed the first speaker-independent continuous speech recognition system, for which he won a Business Week award in 1988.

He started by talking about the “people theme”, saying that this is what the (Chinese) Internet is all about. (For April Fool’s Day, Google China announced that they were going to shut down their servers to save electricity, and that they would have to hire 25 million people to do their searches for them. They got 1,800 resumes for the positions.)

There are 235 million people on the Internet in China. What do these people want? Kai-Fu listed these things: accessibility, shareability, freedom (data wherever they are), simplicity, and security. Google believes that cloud computing solves a lot of these problems. It’s not new, so Google are just a part of it like we all are. But day by day, cloud computing is changing the way we use the Internet.

He then explained a little bit about what the Cloud is. Data is stored in the Cloud, on some server somewhere that is not necessarily known by the user, but it’s just there and accessible. Software and services are also moving to the Cloud, usually accessible via a full-featured web browser on the client device. He also advocated the use of open standards and protocols, which he says are “liked” by Google (e.g. Linux, AJAX, LAMP, etc.) so as to avoid control by one company. Finally, the Cloud should be accessible from any device, especially from phones. He said that when the Apple iPhone hit the market, they found that web usage from that device was 50 times greater than that from other web-capable phones, and that Google’s servers really felt it.

Next up was a history lesson on cloud computing. The PC era was hardware centric. Then, the client-server era was more software centric, which was great for enterprise computing. Cloud computing now abstracts that server and makes it very scalable, by hiding complexities, and with the server being anywhere. This is service centric.

Banks too have become “Clouds”, allowing people to go to any ATM and remove money from their bank wherever they are. Electricity can be thought of similarly, as it can come from various places, and you don’t have to know where it comes from: it just works.

Driving forces behind cloud-based computing include: (i) the falling cost of storage, (ii) ubiquitous broadband, and (iii) the democratisation of the tools of production. This is beginning to make cloud-based computing more like a utility. A lot of this builds on work by IBM and DEC in the 1990s, who realised that computing should be a utility. Only now that these three key things are in place is this becoming a reality.

There are six further properties that make this area exciting: it is (1) user centric, (2) task centric, (3) powerful, (4) accessible, (5) intelligent, and (6) programmable.

(1) User centric. The data moves with you, and the application moves with you. People don’t want to reload their address book or applications on new machines, as it is painful to do. For example, how bad do you feel if you drop or break your laptop? How easy is it to switch your cellphone? It’s hard, because synchronising your data is usually hard to do. The IR (infrared) functionality on a mobile phone is neither easy to use nor user centric: how often do people actually use it to back up things to their laptops?

If data is all stored in the Cloud – images, messages, whatever – once you’re connected to the Cloud, any new PC or mobile device that can access your data becomes yours. Not only is the data yours, but you can share it with others (e.g. on Picasa Web, your photos are stored in the Cloud). You don’t have to worry about where it is. We’re not there just yet, but the time is approaching where the way we deal with photographs will change. Another example is GMail, as you can use it on any device (since large storage is not required on the device). Kai-Fu bets that everyone in the room has some kind of cloud computing-based e-mail.

PCs are normally our window to the world, but mobile devices can do more. Since services know who you are and where you are (eek!), they can give you more targeted content. There are 600 million cellphone users in China, three billion worldwide, dwarfing the number of PCs that are Internet-accessible. Intelligent mobile search is useful for cellphones, giving you local listings and results relevant to your context. The most powerful and popular application is maps, especially when people get lost, or if they spontaneously want to go somewhere. Maps are more than the traditional flat piece of paper, allowing you to search nearby, see real-time traffic flows, etc. Such mashups provide even more power (calling these integrations a map is a misnomer, as the capabilities are enormous). As there’s a move from e-mail usage towards maps and photos, these new applications have to go into the Cloud as well. And with the shift in this direction, another question is: how do you make this economic?

Instant information sharing is also important, e.g. via Google Docs, Page Creator, etc. Recently, Google Sites was released – Google hosts it all for you, so there’s no need for you to buy servers or hosting – 50,000 sites were set up in the first few hours after it began. Not only can you access the data, but you can create it anywhere. The browser is the platform.

(2) Task centric. The applications of the past – spreadsheets, e-mail, calendar – are becoming modules, and can be composed and laid out in a task-specific manner. For example, a task may be teachers creating a departmental curriculum, where you can see the people viewing the curriculum spreadsheet and they can have debates in parallel in real time. Spreadsheet editing allows collaboration and publishing to a selected group of people, with version control.

Google considers communication to be a task, such that in GMail you see pop-up chats and chat histories, combining zero-latency discussions with your communication tasks. If you want, you can have real-time discussions instead of waiting for e-mail responses if people are online in the contacts list. You can also organise all of your common tasks, e.g. using iGoogle’s widgets portal.

(3) Powerful. Having lots of computers in the Cloud means that it can do things that your PC cannot do. For example, Google Search is faster than searching in Windows or Outlook or Word. Of course, Google Search has to be much faster, even though there are many more documents. In terms of how much storage is required, if there are 100 billion pages at 10 kB per page, that’s about 1000 TB of disk space. Cloud computing should have an infinite amount of disks / computation at its disposal. When you issue a query to the Google web search engine, it queries at least 1000 machines (potentially accessing 1000s of terabytes).
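As a sanity check, the storage figure quoted above works out as follows (a back-of-the-envelope sketch using decimal units):

```python
# Back-of-the-envelope check of the index-size estimate above:
# 100 billion pages at roughly 10 kB each.
pages = 100e9          # 100 billion pages
bytes_per_page = 10e3  # ~10 kB per page
total_bytes = pages * bytes_per_page
total_tb = total_bytes / 1e12   # decimal terabytes
print(f"{total_tb:.0f} TB")     # → 1000 TB, i.e. about 1 PB
```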

(4) Accessible. Universal search (“searchology”) was announced by Google last year. Traditional web page search does IR / TF-IDF / page rank stuff pretty well on the Web at large, but if you want to do a specific type of search, for restaurants, images, etc., web search isn’t necessarily the best option. It’s difficult for most people to get to the right vertical search page in the first place, since they usually can’t remember where to go. Universal search is basically a single search that will access all of these vertical searches.

This requires querying all of the specific databases simultaneously: news, images, videos, and so on (tens of such sources today, potentially hundreds or thousands in the future). All of these simultaneous searches then get ranked together, so it is even more computationally intensive than current web search.
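The fan-out-and-merge idea behind universal search can be sketched roughly as below. The vertical names, backends and scores here are purely illustrative assumptions, not Google’s actual architecture:

```python
# Toy sketch of universal search: fan the query out to several
# vertical indexes in parallel, then interleave results by score.
# The verticals and scores are made-up stand-ins.
from concurrent.futures import ThreadPoolExecutor

VERTICALS = {
    "web":    lambda q: [("web:" + q, 0.9)],
    "news":   lambda q: [("news:" + q, 0.7)],
    "images": lambda q: [("img:" + q, 0.8)],
}

def universal_search(query):
    # query every vertical concurrently
    with ThreadPoolExecutor() as pool:
        batches = pool.map(lambda backend: backend(query), VERTICALS.values())
    # flatten the per-vertical hits and rank them by score
    hits = [hit for batch in batches for hit in batch]
    return [doc for doc, score in sorted(hits, key=lambda h: -h[1])]

print(universal_search("beijing"))  # web hit first, then images, then news
```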

(5) Intelligent. Data mining and massive data analysis are required to give some intelligence to the masses of data available (massive data storage + massive data analysis = Google Intelligence).

In their machine translation work, a trillion words were collected from bilingual and monolingual text, and they wanted to not only find various orders of words but also the mappings of words. Statistical models of translation were trained, and they saw how an English-Chinese pair could be aligned. Then, they needed to extract phrases and collect statistics (e.g. how often variations of a certain translation were being used, such as variations for latest / last / newest / most recent). As more training data is added, the quality improves. Context is also an important matter for consideration, and it provides an advantage for the phrase analysis part of Google’s translators. There are estimates that their translator is equivalent to a high-school student’s level of translation quality.

Lots of data can be processed by machine analysis to generate intelligence. But this needs to be combined with humans – via their collaboration and contributions – to change a mass of photos or data or whatever into a very powerful combination. People and tools together can create intelligent knowledge. Applications like Google Earth are much more useful when people can contribute to them, e.g. by National Geographic sticking loads of high-res photos into it. Reviews, 3-D buildings, etc. can turn a tool from a bunch of pictures into something special. Creativity adds connections to data-centric applications, enabling intelligent combinations of content.

With all this data comes the issue of server costs. If you are trying to choose between buying $42,000 high-end servers or cheap PC-class servers at $2,500 each, you can get roughly 33 times the cost efficiency by going for the PC-class servers: a 1000-CPU PC-class cluster costs about the same as a high-end 64-CPU server, with possibly 30 times the performance (figures may be out of date).

Even though there is a lower cost, there still needs to be high reliability. Google search is mainly based on low-cost commodity PCs running Linux. Failures are expected in every system every day. If we assume that there are 20,000 machines, there’s typically a failure rate of 110 per day. Google has built a custom software layer that can tolerate failure. (They have also deployed a new data centre in just three days.)
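Those failure figures imply a fairly short lifetime per machine. A rough sketch of the implied rates, assuming failures are spread evenly across the fleet:

```python
# Implied per-machine failure rate from the figures quoted above:
# 20,000 machines with ~110 failures per day.
machines = 20_000
failures_per_day = 110
p_fail_daily = failures_per_day / machines  # ≈ 0.0055: ~0.55% chance a given machine fails today
mtbf_days = 1 / p_fail_daily                # ≈ 182 days mean time between failures per machine
print(round(mtbf_days))                     # → 182
```

In other words, each commodity machine fails roughly twice a year, which is why the fault-tolerant software layer matters more than the hardware.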

(6) Programmable. This follows on from the previous description of data requirements. How does one program for 10,000 “flaky servers” in a Google farm? There needs to be: (i) fault tolerance, (ii) distributed shared memory (no single machine can store every web page, so data must be spread across many), and (iii) new programming paradigms for processing data at this scale.

For (i) fault tolerance, Google uses GFS or distributed disk storage. Every piece of data is replicated three times. If one machine dies, a master redistributes the data to a new server. There are around 200 clusters (some with over 5 PB of disk space on 500 machines).

“Bigtable” is used for (ii) distributed shared memory. The largest cells in Bigtable are 700 TB, spread over 2000 machines.

MapReduce is the solution for (iii) new programming paradigms. It cuts a trillion records into a thousand parts on a thousand machines. Each machine then loads a billion records and runs the same program over them, and the results are recombined. While in 2005 there were some 72,000 jobs run on MapReduce, in 2007 there were two million (use seems to be increasing exponentially). Not everything is suitable for MapReduce, e.g. parallelising SVMs: matrix operations can’t easily be split up and glued back together. For this, they use Incomplete Cholesky Factorisation.
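The split / map / recombine pattern described above can be illustrated with a toy single-process sketch (a word count over sharded records; the real system distributes the shards across thousands of servers):

```python
# Toy illustration of the MapReduce pattern: split the input into
# shards, run the same map function on each "machine", then merge
# the partial results in a reduce step.
from collections import Counter
from functools import reduce

def map_part(records):
    # each worker counts word occurrences in its own shard
    return Counter(word for rec in records for word in rec.split())

def reduce_parts(partials):
    # merge the per-shard counts into a single result
    return reduce(lambda a, b: a + b, partials, Counter())

records = ["the cloud", "the web", "cloud computing"]
shards = [records[i::2] for i in range(2)]  # split across 2 "machines"
result = reduce_parts(map_part(s) for s in shards)
print(result["cloud"])  # → 2
```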

Cloud computing needs new skills, especially when working with tens of thousands of machines as opposed to just one. The Academic Cloud Computing Initiative in the US and China (at Tsinghua) was launched by Google and IBM. Cloud computing is not just for web-based problems, but it can help provide solutions for scientific problems that were previously very hard to solve.

In terms of benefits, everything should just work, changing the way we work and play. IT should become “simple and safe”, by outsourcing IT to a “trusted shop” via a browser. Entrepreneurs should have new opportunities with this paradigm shift, being freed from monopoly-dominated markets as more cloud-based companies evolve that are powered by open technologies. Governments should leverage such “innovation-enabling platforms”, where people can effectively program tens of thousands of machines themselves. With $540 million of venture capital infused into China last year, Kai-Fu sees cloud-based computing as a catalyst of economic growth. He finished up saying that cloud computing has arrived. “Embrace the Cloud!”

There was one question from the audience. The questioner said that Kai-Fu made cloud computing sound simple (i.e., it was well explained, not that the technologies or efforts were trivial). He asked: what is the societal change, rather than the technological change? Assuming we have cloud-based computing, how can we start to encourage “cloud thinking” within society? The questioner works with universities looking at open access, trying to encourage people to share their intellectual outputs, but believes it is difficult to persuade knowledge workers to move their work into the Cloud. His question was: what can we do to encourage cloud thinking and “cloud knowledge”?

Kai-Fu’s answer was firstly that cloud computing is not simple, rather it is incredibly complex, but we can learn from what has happened so far. There have been efforts to categorise world knowledge, e.g. Cycorp, which Kai-Fu said has not resulted in a success yet (however, I’ll note here that they are becoming part of the Linked Data initiative: as Kingsley Idehen said yesterday, “Yoda is awake”!). There has been some success in various question-answering systems with pieces of knowledge that can be mined and found. He stated that these were the two extremes, but believes that the answer lies somewhere in the middle: some organisation, but not too much. Wikipedia is a step in this direction, so he suggested bringing the question-answering approach and the Wikipedia approach closer together.

He said that two things would be required. Firstly, he saw the need for some kind of translation capability. There is so much knowledge in English, which spoils native English speakers. In China, people are also spoiled. However, for many other countries, there is very little local language content. If auto translation doesn’t work well, some kind of assisted translation is required. Secondly, there should be mobile endeavours to make knowledge available. There may also need to be some economic incentive for people to create and share content via their mobiles.

(More reviews at 1, 2 and 3.)

Really cool SIOC widget from Sindice (for WordPress)

I’ve installed the new Sindice SIOC widget, produced by Adam, Fabio and Giovanni from the Sindice team.

As you can see, if you look at the post author or click into any comments list, each user now has a speech bubble beside the username. Clicking on this bubble will show you posts, comments and topics created by that user across the “SIOC-o-sphere”.


You can also click on any arrow icon beside a link in a blog post to see where else it has been referenced, like this one.

There is a Sindice SIOC API available which serves as a gateway to SIOC data via the Sindice discovery and search services, enabling the verification of the presence of a user or a link on the SIOC-o-sphere as indexed within Sindice.

DataPortability lunch meetup in London / OpenSocial hackathon


I attended the DataPortability lunch meetup in London on Sunday (see link to some photos above), where I met up with DP enthusiasts including Tom Morris, Tony Haile, Chris Saad (founder), Cassandra Shanks, Imp, Julian Bond, Christian Scholz, and Sokratis Papafloratos. We had some great food and interesting discussions, including DP scenarios, the scope of DataPortability (is it more than just the Social Web?), SIOC, forthcoming announcements, and more…

Tom, Christian and I went to the OpenSocial hackathon at the BT centre afterwards. I spoke with organiser Michael Mahemoff briefly, and Dan Peterson invited us to attend the forthcoming Google I/O event in May. I also listened in to Dan Brickley and Cassie discuss connections between FOAF and the OpenSocial APIs. (Unfortunately, I missed the presentations which were on in the morning before I arrived in London.)

Tales from the SIOC-o-sphere #7

It’s been three months since my last round-up of all things SIOC-ed, so here is entry number seven in the series:

Previous SIOC-o-sphere articles:


Kingsley remixes my DataPortability slides as "Data Accessibility and Me: Introducing SIOC, FOAF and the Linked Data Web"

Kingsley Idehen told me on IRC that he remixed my presentation on DataPortability and SIOC from yesterday as Data Accessibility and Me: Introducing SIOC, FOAF and the Linked Data Web.

I’ve never had my slides remixed before, I’m honoured! Here’s the new version: