Category Archives: Japan


BlogTalk 2009 (6th International Social Software Conference) – Call for Proposals – September 1st and 2nd – Jeju, Korea


BlogTalk 2009
The 6th International Conf. on Social Software
September 1st and 2nd, 2009
Jeju Island, Korea

Overview

Following the international success of the last five BlogTalk events, the next BlogTalk – to be held in Jeju Island, Korea on September 1st and 2nd, 2009 – is continuing with its focus on social software, while remaining committed to the diverse cultures, practices and tools of our emerging networked society. The conference (which this year will be co-located with Lift Asia 09) is designed to maintain a sustainable dialog between developers, innovative academics and scholars who study social software and social media, practitioners and administrators in corporate and educational settings, and other general members of the social software and social media communities.

We invite you to submit a proposal for presentation at the BlogTalk 2009 conference. Possible areas include, but are not limited to:

  • Forms and consequences of emerging social software practices
  • Social software in enterprise and educational environments
  • The political impact of social software and social media
  • Applications, prototypes, concepts and standards

Participants and proposal categories

Due to the interdisciplinary nature of the conference, audiences will come from different fields of practice and will have different professional backgrounds. We strongly encourage proposals to bridge these cultural differences and to be understandable for all groups alike. Along those lines, we will offer three different submission categories:

  • Academic
  • Developer
  • Practitioner

For academics, BlogTalk is an ideal conference for presenting and exchanging research work from current and future social software projects at an international level. For developers, the conference is a great opportunity to fly ideas, visions and prototypes in front of a distinguished audience of peers, to discuss, to link up and to learn (developers may choose to give a practical demonstration rather than a formal presentation if they so wish). For practitioners, this is a venue to discuss use cases for social software and social media, and to report your results to like-minded individuals.

Submitting your proposals

You must submit a one-page abstract of the work you intend to present for review purposes (not to exceed 600 words). Please upload your submission along with some personal information using the EasyChair conference area for BlogTalk 2009. You will receive a confirmation of the arrival of your submission immediately. The submission deadline is June 27th, 2009.

Following notification of acceptance, you will be invited to submit a short or long paper (four or eight pages respectively) for the conference proceedings. BlogTalk is a peer-reviewed conference.

Timeline and important dates

  • One-page abstract submission deadline: June 27th, 2009
  • Notification of acceptance or rejection: July 13th, 2009
  • Full paper submission deadline: August 27th, 2009

(Due to the tight schedule, we expect that there will be no deadline extensions. As with previous BlogTalk conferences, we will work hard to endow a fund to support travel costs. As soon as we have reviewed all of the papers, we will announce more details.)

Topics

Application Portability
Bookmarking
Business
Categorisation
Collaboration
Content Sharing
Data Acquisition
Data Mining
Data Portability
Digital Rights
Education
Enterprise
Ethnography
Folksonomies and Tagging
Human Computer Interaction
Identity
Microblogging
Mobile
Multimedia
Podcasting
Politics
Portals
Psychology
Recommender Systems
RSS and Syndication
Search
Semantic Web
Social Media
Social Networks
Social Software
Transparency and Openness
Trend Analysis
Trust and Reputation
Virtual Worlds
Web 2.0
Weblogs
Wikis

At the blognation Japan launch party last week

Last Friday, I attended the blognation Japan launch party (organised by editor Robert Sanzalone) at the Outback Steakhouse in Tokyo along with Eyal and Armin.


I really enjoyed the evening, had a great chat with Rob Cawte (see his report here) about the Semantic Web and community wikis, and also talked to John Foster, Yusuke Kawasaki, Andrew Shuttleworth, Robert, and some others whose names have either escaped me or whose business cards I did not get.

You can read more at blognation Japan.

"Made in Japan: What Makes Manga Japanese? And Why Western Kids Love It"

Since I’m interested in manga through running the boards.jp / Manga to Anime site, I found out about a talk entitled “Made in Japan: What Makes Manga Japanese? And Why Western Kids Love It” while I was in Tokyo last week.

It was held by the Society of Children’s Book Writers and Illustrators in Japan, and featured Roland Kelts (photo), author of “Japanamerica”, and Masakazu Kubo (drawing Pikachu here), an executive of Shogakukan and producer of the Pokémon movie series. The talk covered “the nuts and bolts of the craft of manga and […] the nature of its appeal beyond Japan”, and was followed by a Q&A session.

The speeches were pretty interesting. Kelts started off by giving an overview of the history of manga, ranging from the 1940s and 1950s art of Osamu Tezuka to its current penetration of American bookstores. He then handed over to Kubo-san for some industry perspective, including details of how a week’s worth of manga used to correspond to just 15 minutes on screen, and the fact that anime has permeated other countries partly because it is easier (and hence cheaper) to dub than other animation, since its mouth movements are less precise.

I asked the speakers if something like Brewster Kahle’s book archiving / book mobile project (which I blogged about last week; see video here) would have relevance to the world of manga, since Kubo-san mentioned that a lot of manga is now being digitised. Kubo said that since there are various upload / download legalities with respect to currently-licensed manga, this would be difficult, but that anything that fell outside the (previously) 50-year copyright span could potentially be provided in such a manner.

I enjoyed the session, and even found a picture of the back of my head and boards.jp t-shirt on the Japanamerica blog! My own photos are here.

Web 2.0 Expo Tokyo: Eric Klinker – “Web 2.0 and content delivery”

My last report from the Web 2.0 Expo Tokyo event is about the talk by Eric Klinker, chief technical officer of BitTorrent Inc. (I met Eric and his colleague Vincent Shortino briefly on Thursday evening), on “the power of participation”.

The market for IP video is huge, and a Cisco report called the “Exabyte Era” shows that P2P, which currently accounts for 1014 PB of traffic each month, will continue to rise with a 35% year-over-year growth rate. User-contributed computing is happening right now, and is delivering over half of the Internet traffic today.

A new order of magnitude has arrived: the exabyte (EB). One exabyte is 2^60 bytes, roughly a billion gigabytes. If you wanted to build a website that would deliver 1 EB per month, you would need to be able to transfer at a rate of about 3.5 Tb/s (assuming 100% network utilisation). 1 EB corresponds to 3,507,000 months or 292,000 years of online TV (streams encoded at 1 Mb/s), 64,944 months or 5,412 years of Blu-ray video (maximum standard rate of 54 Mb/s), 351 months or 29 years of online radio traffic, 20 months or 1.7 years of YouTube traffic, and just one month of P2P traffic.

If you have a central service and want to deliver 1 EB per month, you would need about 6.5 Tb/s of peak bandwidth and about 70,000 servers requiring 60-70 megawatts in total. At a price of $20 per Mb/s, it would cost about $130 million per month to run!
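
A quick back-of-the-envelope check of those figures (my own arithmetic, not from the talk, assuming a 30-day month):

```latex
% Sanity check (my arithmetic, not from the talk): 1 EB per month expressed as
% a bit rate, and the monthly cost at $20 per Mb/s of peak capacity.
\[
  \frac{2^{60}\ \text{bytes} \times 8\ \text{bits/byte}}{30 \times 86{,}400\ \text{s}}
  \;\approx\; \frac{9.2\times 10^{18}\ \text{bits}}{2.6\times 10^{6}\ \text{s}}
  \;\approx\; 3.6\ \text{Tb/s}
\]
\[
  6.5\ \text{Tb/s} \;=\; 6.5\times 10^{6}\ \text{Mb/s}
  \quad\Rightarrow\quad
  6.5\times 10^{6} \times \$20/\text{Mb/s} \;\approx\; \$130\ \text{million per month}
\]
```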

The “Web 2.0” way is to use peers to deliver that exabyte. However, not every business is ready to be governed by their userbase entirely. There is an opportunity to take a hybrid model approach. BitTorrent are a content-delivery network that can enable Internet-based businesses to use “the power of participation”. 55 major studios and 10,000 titles are now available via BitTorrent.com (using BitTorrent DNA). Also, the BitTorrent SDK allows BT capability to be added to any consumer electronic device.

He then talked about the Web 2.0 nature of distributed computing, and how we can power something that wouldn’t or couldn’t be powered otherwise. For example, Electric Sheep is a distributed computing application that renders a single frame of a 30-second screensaver on your machine, which you can then use. Social networks also have a lot of machines, but the best example of distributed computing is search. Google has an estimated 500k to 1M servers, corresponding to $4.5B in cumulative capex (that’s capital expenditure to you and me) or 21% of their Q2 net earnings (according to Morgan Stanley). And yet, search is still not a great experience today, since you still have a hard time finding what you want. Search engines aren’t contextual, they don’t see the whole Internet (the “dark web”), they aren’t particularly well personalised or localised, and they aren’t dynamic enough (i.e., they cannot keep up with most Web 2.0 applications [although I’ve noticed that Google is reflecting new posts from my blog quite quickly]).

The best applications involve user participation, with users contributing to all aspects of the application (including infrastructure). Developers need to consider how users can do this (through contributed content, code or computing power). As Eric said, “harness the power of participation, and multiply your ability to deliver a rich and powerful application.”

Web 2.0 Expo Tokyo: Håkon Wium Lie – “The best Web 2.0 experience on any device”

There was a talk at the Web 2.0 Expo Tokyo last Friday afternoon by Håkon Wium Lie, chief technical officer with Opera Software. He has been working on the Web since the early nineties, and is well known for his foundational work on CSS. Opera is headquartered (and Håkon is based) in Norway.

Håkon (pronounced “how come”) started by talking about the Opera browser. Opera has browsers for the desktop, for mobiles and for other devices (e.g., the Nintendo Wii and the OLPC $100 laptop). He thinks that the OLPC machine will be very important (he also brought one along to show us, pictured), and that the browser will be the most important application on this device.

Another product that Opera are very proud of is Opera Mini, which is a small (100k) Java-based browser. Processing of pages takes place via proxy on a fixed network machine, and then a compressed page is sent to Opera Mini.

He then talked about new media types on the Web. Håkon said that video needs to be made into a “first-class citizen” on the Web. At the moment, it takes a lot of “black magic” and third-party plugins and object tags before you can get video to work in the browser for users. There are two problems that need to be solved. In relation to the first problem – how videos are represented in markup – Opera proposed that the <video> element be added to the HTML5 specification. The second problem is in relation to a common video format. The Web needs a baseline format that is based on an open standard. Håkon stated that there is a good candidate in Ogg Theora, which is free of licensing fees, and in HTML5 there may be a soft requirement or recommendation to use this format. He showed some nice mockups of Wikipedia pages with embedded Ogg videos. You can also combine SVG effects (overlays, reflections, filters, etc.) with these video elements.
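
To make that concrete, here is a rough sketch of how a page could attach such a video from script once the element exists (my own illustration based on what HTML5 later standardised, not something shown in the talk; the media URL is hypothetical):

```typescript
// Sketch only: attaching a video as an ordinary element instead of plugin
// "black magic". The media URL below is hypothetical.
const video = document.createElement("video");
video.controls = true; // native playback controls
video.width = 480;

const source = document.createElement("source");
source.src = "https://example.org/talk.ogv"; // hypothetical Ogg Theora file
source.type = "video/ogg";
video.appendChild(source);

// Text shown by browsers that do not understand the video element.
video.appendChild(
  document.createTextNode("Your browser does not support the video element."),
);

document.body.appendChild(video);
```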

He then talked about the HTML5 specification: the WHAT working group (WHATWG) was set up in 2004 to maintain HTML, and a W3C HTML working group was also established earlier this year. HTML5 will include new parsing rules, new media elements and some semantic elements (section, article, nav, aside), and some presentational elements (center, font) will be removed.

Håkon next described how CSS is also evolving. As an example, he showed us some nice screenshots from the CSS Zen Garden, which takes a boring document and asks people to apply their own stylesheets to change its look. Most of them use background images to stylise the document (rather than changing the fonts dramatically).

CSS has a number of properties to handle fonts and text on the Web. Browsers have around ten fonts that can be viewed on most platforms (i.e., Microsoft’s core free fonts). But there are a lot more fonts out there, for example, there are 2500 font families available on Font Freak. Håkon says that he wants to see more browsers being able to easily point to and use these interesting fonts. In CSS2, you can import a library of fonts, and he reiterated his hope that fonts residing on the Web will be used more in the future.
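
As a rough sketch of the idea of pointing at fonts that live on the Web (the family name and URL are made up, and this mirrors what later shipped in browsers rather than anything Håkon demonstrated):

```typescript
// Sketch only: injecting a rule that points at a font residing on the Web.
// The family name and URL are hypothetical.
const fontStyle = document.createElement("style");
fontStyle.textContent = `
  @font-face {
    font-family: "FreakSans";                            /* hypothetical family */
    src: url("https://fonts.example.com/freaksans.ttf"); /* hypothetical URL */
  }
  body { font-family: "FreakSans", sans-serif; }
`;
document.head.appendChild(fontStyle);
```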

Another use for CSS3 is in professional printing. Håkon has co-written a book on CSS that was itself typeset with CSS3, using the Prince tool. CSS3 allows printing requirements such as multiple columns, footnotes and leaders to be specified.

He then talked about the Acid2 test. Acid2 consists of a single web page, and if a browser renders it correctly, it should show a smiley face. Every element is positioned by some CSS or HTML code with some PNGs. Unfortunately, Internet Explorer performs worst in this test. But I also tested out Firefox 2 and got something distorted that looked like this.

The last thing he talked about was 3D. He gave a nice demo of Opera with some JavaScript that interfaces with the OpenGL engine to render a PNG onto a cube and rotates it. He also showed a 3D snake game from Opera (only a hundred or two lines of code), which is available at labs.opera.com.

I really enjoyed the forward-looking nature of Håkon’s presentation, and said hello briefly afterwards to say thanks for Opera Software’s (via Chaals and Kjetil) involvement in our recent SIOC member submission to the W3C.

Web 2.0 Expo Tokyo: Joe Keller – “Understanding and applying the value of enterprise mashups to your business”

(Another delayed report from a talk last Friday at the Web 2.0 Expo.)

Joe Keller is the marketing officer with Kapow, so I was expecting a marketing talk, but there was a nice amount of technical content to keep most people happy. Joe was talking about “getting business value from enterprise mashups”. Kapow started off life as a real-estate marketplace in Europe ten years ago, but moved towards its current focus on mashups after 2002. Referencing Rod Smith (whom I saw last year at BlogTalk 2006), Joe said that mashups allow content to be generated from a combination of rich interactive applications, do-it-yourself applications and the current scripting renaissance.

According to McKinsey, productivity gains through task automation have peaked, and the next productivity wave will be data-oriented as opposed to task-oriented. Joe says that Web 2.0 technologies are a key to unlocking this productivity. He also talked about two project types: systematic projects are for conservative reliability, whereas opportunistic projects (or “situational applications” to use the IBM terminology) are for competitive agility. Mashups fit into the latter area.

The term mashup can apply to composite applications, gadgets, management dashboards, ad hoc reporting, spreadsheets, data migration, social software and content aggregation. The components of a mashup are the presentation layer, logic layer, and the data layer (access to fundamental or value-added data). In this space, companies are either operating as mashup builders or mashup infrastructure players like Kapow.

The main value of mashups is in combining data. For example, HousingMaps, the mashup of Google Maps and data from Craig’s List, was one of the first interesting mashups. The challenge is that mashups are normally applied to everyone’s data, but if you’re looking for a house, you may want to filter by things like school district ratings, fault lines, places of worship, or even by proximity to members of your LinkedIn / MySpace network, etc.

He then listed some classes of mashup data sources. In fundamental data, there’s structured data, standard feeds, data that can be subscribed to, basically stuff that’s open to everyone. The value-added data is more niche: unstructured data, individualised data, vertical data, etc. The appetite for data collection is growing, especially around the area of automation to help organisations with this task. The amount of user-generated content (UGC) available is a goldmine of information for companies, enabling them to create more meaningful time series that can be mashed up quickly into applications. According to ProgrammableWeb, there are now something like 400 to 500 mashup APIs available, but there are 140 million websites according to NetCraft, so there is a mismatch in terms of the number of services available to sites.

Kapow aims to turn data into business value, “the right data to the right people at the right time.” Their reputation management application allows companies to find out what is being said about a particular company through blogs, for sentiment analysis. They also provide services for competitive intelligence, i.e., how do you understand the pricing of your competitors in an automated fashion. Asymmetric intelligence is another service they provide for when people are looking for a single piece of information that one person has and no-one else possesses. Business automation is where mashups are being used to automate internal processes, e.g., to counteract the time wasted by “swivel-chair integration” where someone is moving from one browser on one computer to another and back again to do something manually. Finally, opportunistic applications include efforts whereby companies are aiming to make users part of their IT “team”, i.e., by allowing users to have access to data and bringing this into business processes: Web 2.0 infrastructure allows companies to use collective wisdom using Kapow technologies.

About RSS, Joe said that almost every executive in every corporation is starting to mandate what feeds he wants his company to provide (and RSS feeds are growing as quickly as user-generated content in blogs, wikis, etc.). Kapow’s applications allow you to create custom RSS feeds, and he gave a short demo of using Kapow to build an on-the-fly REST service. His service produced the quote for a company’s stock price by extracting identified content from an area of a web page, which could then be incorporated into other applications like an Excel spreadsheet. I asked Joe if it is difficult to educate end users about REST and RSS. He pointed to the ease with which most people can add feeds to iGoogle and said that it’s becoming easier to explain this stuff to people.
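
I didn’t note the exact mechanics of the demo, but consuming such an on-the-fly REST service looks roughly like this (the endpoint, response shape and ticker symbol are my own invention, not Kapow’s actual API):

```typescript
// Sketch of consuming a screen-scraped "stock quote" REST service of the kind
// Joe demoed. The endpoint URL and JSON shape are hypothetical, not Kapow's API.
interface Quote {
  symbol: string;
  price: number;
  asOf: string;
}

async function fetchQuote(symbol: string): Promise<Quote> {
  const response = await fetch(`https://mashup.example.com/quote/${symbol}`);
  if (!response.ok) {
    throw new Error(`Quote service returned ${response.status}`);
  }
  return (await response.json()) as Quote;
}

// The returned value could then be dropped into a spreadsheet, dashboard, etc.
fetchQuote("ACME").then((q) => console.log(`${q.symbol}: ${q.price} (${q.asOf})`));
```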

Kapow’s server family allows portal creation, data collection (internal and external), and content migration via mashups, which Joe reckons are often more useful than static migration scripts since they can be customised and controlled. Kapow also provide a free “openkapow” API and site for developers to share how they build mashups and feeds.

In summary, Joe gave these takeaways:

  • The next business productivity wave will be via data and know-how automation, not routine task automation.
  • Knowledge workers need self-service mashup technology to take advantage of this.
  • Access to critical (value-added) data can create a competitive edge.
  • Web 2.0 technologies complement existing IT systems to maintain the competitive edge.

Web 2.0 Expo Tokyo: Scott Dietzen – "The impact of ‘software as a service’ and Web 2.0 on the software industry"

Scott Dietzen, president and CTO of Zimbra, gave the third talk last Friday at Web 2.0 Expo Tokyo. Zimbra has been on the go for four years (so they are Web 2.0 pioneers), and embarrassingly I told Scott that I only found out about them very recently (sorry!). Scott’s aim for this talk was to share the experience of having one of the largest AJAX-based web applications (thousands of lines of JavaScript code). Because Zimbra’s status has changed since they originally signed up for the conference, he mentioned that Yahoo! are the new owners of Zimbra. But Scott affirmed that Zimbra will remain open source and committed to the partners and customers who have brought Zimbra to where it is.

Web 1.0 started off for consumers, but began to change the ways in which businesses used technology. A handful of technologies allowed us to create a decade of exciting innovations. With Web 2.0, all of us have become participants, often without realising the part we play on the Web – clicking on a search result, uploading a video, updating a social network page – all of this contributes to and changes the Web 2.0 infrastructure. This has enabled phenomena like Yahoo!’s Flickr and open source, where a small group of people get together, put a basic model forward, and then let it loose; as a result of many contributions from around the world, we now get these phenomena. There are 11,000 participants in the Zimbra open source community, vastly more than the personpower Zimbra or Yahoo! could put into the project.

Mashups may be the single best thing for Zimbra. AJAX has won over much of the Internet because websites have voted with their feet, and according to Scott “it actually works”. Scott was formerly part of the WebLogic team, and one of that team said recently that there was a special place in heaven for whoever in Zimbra had the patience to get all of that JavaScript programming working properly. There are currently 50 or 60 AJAX development toolkits, but Scott hopes that the industry can rally around a smaller number, especially open-source technologies which offer long-term portability across all the leading platforms.

Another issue is that browsers weren’t initially designed for this kind of “punishment”, so it’s taken time for browsers to become solid productive AJAX containers. They can still do better, and Scott said he is excited to see the emergence of JIT-compilation technology that will allow browsers to operate at least two to three times faster.

With Zimbra, there is no caching of user data within the client, so on a public kiosk there will be no security leaks under AJAX. The challenge is that the source code is now available to anyone with a web browser. It is crucial to protect against ever executing any JS code that is external to your application. For the first time, we have a universal, connected user interface that allows one to mix and match various UIs together: Scott reckons we’ve only just begun to touch the surface of what can be done with mashups.
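
One concrete way of honouring the “never execute external JS” point above is to treat anything from outside the application as data rather than markup; a minimal sketch (the element id and sample string are made up, not Zimbra’s code):

```typescript
// Minimal sketch of the point about never executing JavaScript that comes from
// outside your application: insert third-party content as text, not markup.
// The element id is hypothetical.
function renderUntrusted(containerId: string, untrustedHtml: string): void {
  const container = document.getElementById(containerId);
  if (!container) return;
  // textContent never parses or executes the string, so embedded <script>
  // tags and inline event handlers stay inert.
  container.textContent = untrustedHtml;
}

renderUntrusted("mashup-panel", '<img src=x onerror="alert(1)">'); // shown as plain text
```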

There are four techniques for speeding up AJAX applications. Firstly, combine resources together where possible. Then, compress the pages to shrink the required bandwidth for smaller pipes. Next is caching, to avoid having to re-fetch the JS and re-interpret it (in Zimbra, they include dates indicating when the JS files were last updated). The last and best technique is “lazy loading”: Zimbra is a very large JS application in one page, and by breaking it up into several modules that can be loaded on demand, one can reduce the time before you can first see and start using the application.
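
The lazy-loading technique maps naturally onto loading modules on demand; a small sketch (the module path, its export and the element id are hypothetical, and dynamic import is a later convenience rather than whatever Zimbra actually used):

```typescript
// Sketch of the "lazy loading" technique: ship a small shell first, then pull
// in heavy modules only when the user actually needs them. The module path
// and its exports are hypothetical.
async function openCalendar(): Promise<void> {
  // The calendar code is fetched, parsed and interpreted only on first use,
  // shortening the time before the main application is visible and usable.
  const calendar = await import("./calendarModule.js");
  calendar.show();
}

document.getElementById("calendar-tab")?.addEventListener("click", () => {
  void openCalendar();
});
```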

Offline AJAX is a fundamental change and offers many opportunities. You can have the web experience when on a flight or when far away from the data centre that you normally use. Zimbra is faster to use as an offline version while synchronising back to California, rather than having to wait for every operation to cross the ocean and back again. For Zimbra, they took the Java server code and produced a micro version to install on the local desktop. This allows one to preserve all the “smarts” and make them available to desktop users. Offline isn’t for everything: for example, when data is so sensitive that it shouldn’t be cached on a PC, or when old data goes stale and no longer makes sense. You also have to solve synchronisation issues: you can be changing your mailbox while on a plane, but new mail is arriving in the meantime, and some reconciliation has to take place. And there is also the general problem of desktop apps: once you have code out there, how do you upgrade it, and so on.
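
To illustrate the reconciliation problem, here is a toy last-write-wins merge (my own sketch, not Zimbra’s actual algorithm): changes queued while offline have to be merged with whatever happened on the server in the meantime.

```typescript
// Toy illustration of offline reconciliation (not Zimbra's algorithm): changes
// queued locally are merged with changes seen on the server, and conflicts on
// the same message field are resolved by the most recent timestamp.
interface Change {
  messageId: string;
  field: "read" | "folder" | "flagged";
  value: string;
  modifiedAt: number; // epoch millis
}

function reconcile(local: Change[], server: Change[]): Change[] {
  const merged = new Map<string, Change>();
  for (const change of [...server, ...local]) {
    const key = `${change.messageId}:${change.field}`;
    const existing = merged.get(key);
    // Last-write-wins: keep whichever side touched the field most recently.
    if (!existing || change.modifiedAt > existing.modifiedAt) {
      merged.set(key, change);
    }
  }
  return [...merged.values()];
}
```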

In Web 1.0, UI logic, business logic, and data logic were all supposed to be separated. They could fix some (X)HTML and SQL schemas to aid with this, but in practice people didn’t modularise. In Web 2.0, there is an effort to have clearer separations (due to mashups, feeds, etc.) between client / UI logic, server / business logic, and data logic. It’s better to modularise: getting people to move towards a more modular architecture will allow you to get more value from your business applications, and will also allow you to “play in this Web 2.0 world”. In relation to SOA, Scott said that we are perhaps moving from something like ISO, where there’s one big document with a 10,000-page specification, to something almost as problematic, where there are one-page specs for 10,000 or more web services. There is a well-known theory that you can’t start with something complex and expect everyone to suddenly start using it.

He then focused on software as a service, or SaaS. SaaS is inherent in the Web, since the service is off somewhere else when you open up a browser page. He also talked about the opportunities when businesses are put together in the same data centres. This results in multi-tenancy, and the ability (or need) to set up a single server farm with tens of thousands of companies’ data all intermixed in a secure way without compromising each other. There is a need to think about how to manage so many users together at once; this is usually achieved through a common class of service for all of these users as a whole. Administration should be delegated to some extent, an important aspect to get right. You may also want to allow users to customise and extend their portion of the applications they are using, if appropriate.

Scott next talked about convergence. E-mail has made great progress in becoming part of the web experience (Hotmail, GMail, Yahoo! Mail, etc.). The same thing is now happening to IM, to VoIP, to calendars, etc. For example, a presence indicator next to an e-mail inbox shows if each user is available for an IM or a phone call. Or the reverse: someone tries to call or IM you, but you can push back and say that you just want them to e-mail you because you’re not available right now. Being able to prioritise communications based on who your boss is, who your friends are, etc., is a crucial aspect of harnessing the power of these technologies. On voice, we want to be able to see our call logs, to use these to dial our phone, e.g., you want to just click on a person and call that person. You may also want to forward segments from that voice call over e-mail or IM.

In Japan, they have had compelling applications for mobile devices for a lot longer than the rest of the world. Scott showed a demonstration of the Zimbra experience on an iPhone. He finished by saying that everything about these new technologies has to be used right to make someone’s life better and to make usage more compelling. Innovation, or how deeply we can think about what the future ought to look like, is very important.

Seiji Sato from Sumitomo, whose subsidiary Presidio STX invested in Zimbra last year, then spoke. He started by mentioning that over 100 corporations are now using Zimbra. Sumitomo hopes to contribute to “synergy effects” in Japan’s Web 2.0 community by mashing up services from various businesses and by providing the possibility to extend and utilise available company competencies.

To expand Zimbra’s usage in Japan, Sumitomo have been working with their associate company FeedPath and other Web 2.0 businesses, providing Zimbra localisation and organising a support structure both for this market and for early adopters. Sato said that although Sumitomo are not a vendor or manufacturer, they feel that the expansion of Web 2.0 is quite viable and very important.

After the talk I asked Scott if Zimbra would be looking at leveraging any widgets that will be developed under the much-hyped OpenSocial initiative within the Zimbra platform, since it seemed to me that there is a natural fit between the implicit social networking information being created within Zimbra and the various widgets that are starting to appear (and I guess since they are just in XHTML / JS, there’s a technology match at least). Scott told me that Zimbra already has around 150 plugins, and that the ability to layer mashups on top of this implicit social network information is certainly very important to them. He was unsure if OpenSocial widgets would fit to Zimbra since their e-mail platform is quite different from SNS platforms, but he did say [theoretically, I should add, as there are no plans to do so] that if such widgets were ported to work with Zimbra, they would probably require extensive testing and centralised approval rather than just letting anybody use whatever OpenSocial widget they wanted to within Zimbra.

Web 2.0 Expo Tokyo: Rie Yamanaka – “A paradigm shift in advertisement platforms: the move into a real Web 2.0 implementation phase”

The second talk at the Web 2.0 Expo Tokyo this morning was by Rie Yamanaka, a director with Yahoo!’s commercial search subsidiary Overture KK. (I realised after a few minutes of confusion that Ms. Yamanaka’s speech was being translated into English via portable audio devices.)

According to Yamanaka, Internet-based advertising can be classified into three categories: banners and rich media, list-type advertisements (which was the central topic of her presentation), and mobile advertising (i.e., a combination of banner and listings grouped onto the mobile platform).

First of all, she talked about advertisement lists. Ad lists are usually quite accurate in terms of targeting, since they are shown and ranked based on a degree of relevance. Internet-based ads (when compared with TV, radio, etc.) are growing exponentially, and this increase is primarily being driven by ad lists and mobile ads. In yesterday’s first keynote with Joi Ito, it was mentioned that the focus has already shifted a lot towards internet advertising in the US, perhaps more so than in Japan, but that this is now occurring in Japan too.

She then talked about the difference between banners and ad lists. In the case of banner ads, what matters is the number of impressions, so the charge is based on CPM (cost per mille or thousand), and some people think of it as being very “Web 1.0”-like. However, ad lists, e.g., as shown in search results, are focussed more on CPC (cost per click), and are often associated with Web 2.0.

Four trends (with associated challenges) are quite important and are being discussed in the field of Internet advertising: the first is increased traceability (one can track and keep a log of who did what); the next is behavioural or attribute targeting, which is now being implemented in a quite fully-fledged manner; third are APIs, which are now entering the field of advertising; and finally (although it’s not “Web 2.0”-related in a pure sense) there is the integration between offline and online media, where the move to search for information online is becoming prevalent.

  • With traceability, you can get a list of important keywords in searches that result in subsequent clicks. Search engine marketing can help to eliminate the loss of opportunities that may occur through missed clicks.
  • Behavioural targeting, based on a user’s search history, can give advertisers a lot of useful information. One can use, for example, information on gender (i.e., static details) or location (i.e., dynamic details, perhaps from an IP address) for attribute-based targeting. This also provides personalised communication with users, and one can then deploy very flexible products based on this. Yahoo! Japan recently announced details of attribute-based advertising for their search which combines an analysis of the log histories of users and advertisers.
  • As in yesterday’s talk about Salesforce working with Google, APIs for advertising should be combined with core business flows, especially when a company provides many products (e.g., Amazon.com or travel services). For a large online retailer, you could have some logic that matches a keyword to the current inventory, and the system should hide certain keywords if the associated items are not in stock. This is also important in the hospitality sector, where for example there should be a change in the price of a product when it goes past a best-before time or date (e.g., hotel rooms drop in price after 9 PM); see the sketch after this list. With an API, one can provide very optimised ads that could not be created on the fly by humans, and advertisers can take a scientific approach to dynamically improving offerings in terms of cost and sales.
  • Matching online information to offline ads, while not directly related to Web 2.0, is important too. If one looks at TV campaigns, one can analyse information about how advertising the URL for a particular brand can lead to the associated website. Some people may only visit a site after seeing an offline advertisement, so there could be a distinct message sent to these types of users.
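
A small sketch of the inventory-aware ad logic described in the third bullet above (all product data, the discount factor and the 9 PM cut-off are illustrative, not anything Overture showed):

```typescript
// Sketch of ad-API logic of the kind described above: suppress keywords whose
// items are out of stock, and discount time-sensitive inventory after a
// cut-off hour. All product data and the 9 PM rule are illustrative.
interface InventoryItem {
  keyword: string;
  inStock: boolean;
  basePrice: number;
  discountAfterHour?: number; // e.g. hotel rooms drop in price after 21:00
}

function buildAdListings(items: InventoryItem[], now: Date) {
  return items
    .filter((item) => item.inStock) // hide keywords for unavailable items
    .map((item) => {
      const discounted =
        item.discountAfterHour !== undefined && now.getHours() >= item.discountAfterHour;
      return {
        keyword: item.keyword,
        price: discounted ? item.basePrice * 0.7 : item.basePrice, // illustrative discount
      };
    });
}

const listings = buildAdListings(
  [{ keyword: "tokyo hotel", inStock: true, basePrice: 12000, discountAfterHour: 21 }],
  new Date(),
);
console.log(listings);
```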

In terms of metrics, internet-based ads have traditionally been classified in terms of what you want to achieve. In cases where banners are the main avenue required by advertisers, CPM is important (if advertising a film, for example, the volume of ads displayed is what matters). On the other hand, if you actually want to get your full web page up on the screen, ranking and CPC are important, so the fields of SEO and SEM come into play.

Ms. Yamanaka then talked about CPA (cost per acquisition), i.e., how much it costs to acquire a customer. The greatest challenge in the world of advertising is figuring out how much [extra] a company makes as a result of advertising (based on what form of campaign is used). If one can figure out a way to link sales to ads (e.g., through internet conversion, where a person moves onwards from an ad and makes a purchase), then one can get a measure of the CPA. For companies who are not doing business on the Web, it’s hard to link a sale to an ad (e.g., if someone wants to buy a Lexus and reads reference material on the Web, he or she may then go off and buy a BMW without any traceable link). On the Web, one can get a traceable link from an ad impression to an eventual deal or transaction (through clicking on something, browsing, getting a lead, and finding a prospect).

She explained that we have to understand why we are inviting customers who watch TV onto the Web: is it for government information, selling products, etc.? The purpose of a 30-second advert may actually be to guide someone to a website where they will read material online for more than five minutes. With traceability, one can compare targeted results and what a customer did depending on whether they came from an offline reference (she didn’t say it, but I presume through a unique URL) or directly online. Web 2.0 is about personalisation, and targeting internet-based ads towards segmented user groups is also important (e.g., using mobile or PC-based social network advertising for female teens in Tokyo; for salarymen travelling between Tokyo and Osaka, it may be better to use ad lists or SMS advertising on mobiles or some other format; and for people at home, it may be appropriate to have a TV ad at a key time at night when there’s a high probability of them going and carrying out a web search for the associated product), so there’s a need to find the best format and media.

She again talked about creating better synergies between offline and online marketing (e.g., between a TV-based ad and an internet-based ad). If a TV ad shows a web address, it can result in nearly 2.5 times more accesses than can be directly obtained via the Internet (depending on the type of products being advertised), so one can attract a lot more people to a website in this way. Combining TV and magazines, advertisers can prod / nudge / guide customers to visit their websites. There is still a lot of room for improvement in determining how exactly to guide people to the Web. It depends on what a customer should get from a company, as this will determine the type of information to be sent over the Web and whether giving a good user experience is important (since you don’t want to betray the expectation of users and what they are looking for). Those in charge of brands for websites need to understand how people are getting to a particular web page as there are so many different entry points to a site.

Ms. Yamanaka referenced an interesting report from comScore about those who pre-shop on the Web spending more in a store. These pre-shoppers spend 41% more in a real store if they have seen internet-based ads for a product (and for every $1 these people spent online, they would spend an incremental $6 in-store).

There’s also a paradigm shift occurring in terms of ubiquitous computing, which is already a common phenomenon here in Japan. At the end of her presentation, she also referenced something called “closed-loop marketing”, which I didn’t really get. But I did learn quite a bit about online advertising from this talk.

Web 2.0 Expo Tokyo: Evan Williams, co-founder of Twitter – “In conversation with Tim O’Reilly”

The first talk of the day was a conversation between Tim O’Reilly and Evan Williams.

Evan started off by forming a company in his home state of Nebraska, then moved to work for O’Reilly Media for nine months, but says he never liked working for other people. A little later on he formed Pyra, which, after a year, had settled on Blogger as its main focus in 1999. They ran out of money in the dot-com bust, had some dark times, and he had to lay off a team of seven in 2000. He continued to keep it alive for another year and built it back up. Then Evan started talks with Google and sold Blogger to them in 2003, continuing to run Blogger at Google for two years. He eventually left Google anyway; he says that it was partially because of his own personality (working for others), and also because within Google, Blogger was a small fish in a big pond. Part of the reason for selling to Google in the first place was that Pyra had respect for them, it was a good working environment, and Google would provide a stable platform for Blogger to grow (eventually without Evan). But in the end, he felt that he’d be happier and more effective outside Google.

So he then went on to start Odeo at Obvious Corp. Because of timing and the fact that they got a lot of attention, they raised a lot of money very easily. He ran Odeo as it was for a year and a half. With Jack Dorsey at Odeo / Obvious, they began the Twitter project. Eventually Evan bought out his investors when he realised Odeo had possibly gotten it wrong as it just didn’t feel right in its current state.

Tim asked Evan what Twitter is and what Web 2.0 trends it shows off. Evan says it’s a simple service described by many as microblogging (a single Twitter message is called a tweet): blogging based on very short updates, with the focus on real-time information: “what are you doing?” Those who are interested in what someone is doing can receive updates on the Web or on their mobile. Some people call it “lifestreaming”, according to Tim. Others think it’s just lots of mundane, trivial stuff, e.g. “having toast for breakfast”. Why it’s interesting isn’t so much because the content is interesting, but rather because you want to find out what someone is doing. Evan gave an example: when a colleague was pulling up dusty carpets in his house, he got a tweet from Evan saying “wine tasting in Napa”, so it’s almost a vision of an “alternative now”. Through Twitter, you can know very minute things about someone’s life: what they’re thinking, that they’re tired, etc. Historically, we have only known that kind of information for the very few people we are close to (or for celebrities!).

The next question from Tim was how do you design a service that starts off as fun but becomes really useful? A lot of people’s first reaction in relation to Twitter is “why would I do that”. But then people try it and find lots of other uses. It’s much the same motivation (personal expression and social connection) as other applications like blogging, according to Evan. A lot of it comes from the first users of the application. As an example, Twitter didn’t have a system allowing people to comment, so the users invented one by using the @ sign and a username (e.g., @ev) to comment on other people’s tweets (and that convention has now spread to blog comments). People are using it for conversation in ways that weren’t expected. [Personal rant here, in that I find the Twitter comment tracking system to be quite poor. If I check my Twitter replies, and look at what someone has supposedly replied to, it’s inaccurate simply because there is no direct link between a microblog post and a reply. It seems to assume by default that the recipient’s “previous tweet by time” is what a tweet sender is referring to, even when they aren’t referring to anything at all but rather are just beginning a new thread of discussion with someone else using the @ convention.]
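
The @ convention itself is trivial to detect mechanically; a tiny sketch (the regular expression is my own illustration, not Twitter’s implementation), which also shows why a reply carries no explicit pointer to the tweet it answers:

```typescript
// Sketch of detecting the user-invented @reply convention in a tweet. The
// regular expression is my own illustration, not Twitter's implementation.
// Note the text carries no reference to the tweet being replied to, which is
// exactly the ambiguity complained about above.
function extractMentions(tweet: string): string[] {
  const matches = tweet.match(/@(\w+)/g) ?? [];
  return matches.map((m) => m.slice(1)); // drop the leading "@"
}

console.log(extractMentions("@ev wine tasting in Napa sounds better than pulling up carpets"));
// -> ["ev"]
```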

Tim said that the team did a lot for Twitter in terms of usability by offering an API that enabled services like Twittervision. Evan said that their API has been surprisingly successful: there are at least a dozen desktop applications, others that extract data and present it in different ways, various bots that post information to Twitter (URLs, news, weather, etc.), and more recently a timer application that will send a message at a certain time in the future for reminders (e.g., via the SMS gateway). The key thing with the API is to build a simple service and make it reusable by other applications.

Right now, Twitter doesn’t have a business model: a luxury at this time, since money is plentiful. At some point, Tim said they may have to be acquired by someone who sees a model or feels that they need this feature as part of their offering. Evan said they are going to explore this very soon, but right now they are focussed on building value. A real-time communication network used by millions of people multiple times a day is very valuable, but there is quite a bit of commercial use of Twitter, e.g., Woot (the single special offer item per day site) have a lot of followers on Twitter. It may be in the future that “for this class of use, you have to pay, but for everyone else it’s free”.

20% of Twitter users are in Japan, but they haven’t internationalised the application apart from having double-byte support. Evan says they want to do more, but they are still a small team.

Tim then asked how important it is to have rapid application development for systems like Twitter (which is based on Ruby on Rails). Most of Google’s applications are in Java, C++ and Python, and Evan came out of Google wanting to use a lightweight framework for this kind of development, since there’s a lot of trial and error in creating Web 2.0 applications. With Rails, there are challenges to scaling, and since Twitter is one of the largest Rails applications, there are a lot of problems that have yet to be solved. Twitter’s developers talk to 37 Signals a lot (and to other developers in the Rails community); incidentally, one of Twitter’s developers has Rails commit privileges.

Tim says there’s a close tie between open source software and Web 2.0. Apparently, it took two weeks to build the first functional prototype of Twitter. There is a huge change in development practice related to Web 2.0: a key part of Web 2.0 is a willingness to fail, since people may not like certain things in a prototype version. One can’t commit everything to a single proposition, but on the flip side, sometimes you may need to persist (e.g., in the case of Blogger, if you believe in your creation and it seems that people like it).

So, that was it. It was an interesting talk, giving an insight into the experiences of a serial Web 2.0 entrepreneur (of four, or was it five, companies). I didn’t learn anything new about Twitter itself or about what they hope to add to their service in the future (apart from the aforementioned commercial opportunities), but it’s great to have people like Evan who seem to have an intuitive grasp of what people find useful in Web 2.0 applications.

Day 1 (or at least half of it) at the Web 2.0 Expo Tokyo

After much fupping searching of bags, bodies and shoes and confiscating of my soft drinks in Busan Airport, I made it to the Cerulean Tower Tokyu Hotel in Shibuya this afternoon for the Web 2.0 Expo Tokyo where I attended some of today’s events. (I missed this morning’s English-language sessions unfortunately; I was looking forward to the ones with Joi Ito and Tim Bray.)

So I began by going to the exhibition demonstrations in the afternoon: after talking to Paul Chapman from Wall Street Associates (whom I met along with his colleague Ross Sharrott) about social software and the Semantic Web, Paul recommended that I go see the Springnote exhibition from the Korea-based NCsoft. Steve Kim from the company gave me a nice demonstration of their Springnote WYSIWYG wiki system for writing, organising and sharing personal notes. At some of the other stands, I also learned more than I previously knew about the Zimbra mashup-enabled e-mail application and the Lotus Connections enterprise social networking system from IBM.

After that, I met a bunch of cool people at the Web 2.0 Expo Tokyo cocktail party: Jennifer Pahlka (the Web 2.0 Expo organiser with CMP Technology, who’s just after recovering from a busy sister event in Berlin), Tim O’Reilly (with whom I had a short but interesting conversation about how the Semantic Web can work with Web 2.0; that it can be about using semantics to create the connections between existing community contributions on various social sites rather than requiring a load of unrewarding manual slogging), Brady Forrest (organising chair for Web 2.0 Expo and a number of other conferences with O’Reilly Media), Evan Williams and Sara Morishige (the co-founder of Pyra Labs / Odeo / Twitter and his wife whom I met very briefly), Web 2.0 Expo Tokyo advisory board members Seiji Sato and Shuji Honjo, venture capital guru Masashi Kobayashi, and also project manager Fumi Yamazaki from Joi Ito’s Lab.

Talking with Fumi, we agreed that there’s not enough social media being produced by attendees at the event, so we endeavoured to make up for it tomorrow. To this end, because I left our big FZ7 at home in Ireland and only have my camera phone with me, I went exploring in Shibuya and got a nice cheap wi-fi enabled Nikon COOLPIX S51c for $239 (which is a good $50 cheaper than the average price online; my first Blade Runner-like Tokyo skyline picture is shown on the right). I’ll be snapping like mad tomorrow, and I’d also encourage people to use the “web2expotokyo” tag for their event-related content: let’s see if we can gather some stuff from these two days on Flickr, Technorati, SlideShare, etc.

I’m looking forward to these talks tomorrow (I’ve “nativised” the literal translations of the presentation titles given on this page):

  • 10:00 – Evan Williams, co-founder of Twitter – “In conversation with Tim O’Reilly”
  • 10:55 – Rie Yamanaka, a director with Yahoo!’s commercial search subsidiary Overture KK – “A paradigm shift in advertisement platforms: the move into a real Web 2.0 implementation phase”
  • 11:50 – Scott Dietzen, president and CTO of Zimbra – “The impact of ‘software as a service’ and Web 2.0 on the software industry”
  • 14:35 – Joe Keller, marketing officer with Kapow – “Understanding and applying the value of enterprise mashups to your business”
  • 16:35 – Håkon Wium Lie, CTO with Opera and the creator of CSS – “The best Web 2.0 experience on any device”
  • 17:30 – Eric Klinker, CTO with BitTorrent Inc. (I met Eric and Vincent Shortino this evening) – “Web 2.0 and content delivery”

Then, tomorrow (Friday) night, the blognation Japan launch party will take place here in Shibuya. Check out the Upcoming or Facebook pages for more details and sign up if you’re interested. (Oh, and on Saturday, since I’m an anime and manga fan, I plan to go to see the talk “Made in Japan: What Makes Manga Japanese – And Why Western Kids Love It” that’s on here too!)