Category Archives: Japan


BlogTalk 2009 (6th International Social Software Conference) – Call for Proposals – September 1st and 2nd – Jeju, Korea


BlogTalk 2009
The 6th International Conf. on Social Software
September 1st and 2nd, 2009
Jeju Island, Korea

Overview

Following the international success of the last five BlogTalk events, the next BlogTalk – to be held in Jeju Island, Korea on September 1st and 2nd, 2009 – is continuing with its focus on social software, while remaining committed to the diverse cultures, practices and tools of our emerging networked society. The conference (which this year will be co-located with Lift Asia 09) is designed to maintain a sustainable dialogue between developers, innovative academics and scholars who study social software and social media, practitioners and administrators in corporate and educational settings, and other general members of the social software and social media communities.

We invite you to submit a proposal for presentation at the BlogTalk 2009 conference. Possible areas include, but are not limited to:

  • Forms and consequences of emerging social software practices
  • Social software in enterprise and educational environments
  • The political impact of social software and social media
  • Applications, prototypes, concepts and standards

Participants and proposal categories

Due to the interdisciplinary nature of the conference, audiences will come from different fields of practice and will have different professional backgrounds. We strongly encourage proposals to bridge these cultural differences and to be understandable for all groups alike. Along those lines, we will offer three different submission categories:

  • Academic
  • Developer
  • Practitioner

For academics, BlogTalk is an ideal conference for presenting and exchanging research work from current and future social software projects at an international level. For developers, the conference is a great opportunity to fly ideas, visions and prototypes in front of a distinguished audience of peers, to discuss, to link-up and to learn (developers may choose to give a practical demonstration rather than a formal presentation if they so wish). For practitioners, this is a venue to discuss use cases for social software and social media, and to report on any results you may have with like-minded individuals.

Submitting your proposals

You must submit a one-page abstract of the work you intend to present for review purposes (not to exceed 600 words). Please upload your submission along with some personal information using the EasyChair conference area for BlogTalk 2009. You will receive a confirmation of the arrival of your submission immediately. The submission deadline is June 27th, 2009.

Following notification of acceptance, you will be invited to submit a short or long paper (four or eight pages respectively) for the conference proceedings. BlogTalk is a peer-reviewed conference.

Timeline and important dates

  • One-page abstract submission deadline: June 27th, 2009
  • Notification of acceptance or rejection: July 13th, 2009
  • Full paper submission deadline: August 27th, 2009

(Due to the tight schedule we expect that there will be no deadline extensions. As with previous BlogTalk conferences, we will work hard to establish a fund to support travel costs. As soon as we have reviewed all of the papers, we will announce more details.)

Topics

Application Portability
Bookmarking
Business
Categorisation
Collaboration
Content Sharing
Data Acquisition
Data Mining
Data Portability
Digital Rights
Education
Enterprise
Ethnography
Folksonomies and Tagging
Human Computer Interaction
Identity
Microblogging
Mobile
Multimedia
Podcasting
Politics
Portals
Psychology
Recommender Systems
RSS and Syndication
Search
Semantic Web
Social Media
Social Networks
Social Software
Transparency and Openness
Trend Analysis
Trust and Reputation
Virtual Worlds
Web 2.0
Weblogs
Wikis

At the blognation Japan launch party last week

Last Friday, I attended the blognation Japan launch party (organised by editor Robert Sanzalone) at the Outback Steakhouse in Tokyo along with Eyal and Armin.


I really enjoyed the evening, had a great chat with Rob Cawte (see his report here) about the Semantic Web and community wikis, and also talked to John Foster, Yusuke Kawasaki, Andrew Shuttleworth, Robert, and some others whose names have either escaped me or whose business cards I did not get.

You can read more at blognation Japan.

"Made in Japan: What Makes Manga Japanese? And Why Western Kids Love It"

Since I’m interested in manga through running the boards.jp / Manga to Anime site, I found out about a talk entitled “Made in Japan: What Makes Manga Japanese? And Why Western Kids Love It” while I was in Tokyo last week.

It was held by the Society of Children’s Book Writers and Illustrators in Japan, and featured Roland Kelts (photo), author of “Japanamerica“, and Masakazu Kubo (drawing Pikachu here), an executive of Shogakukan and producer of the Pokémon movie series. The talk covered “the nuts and bolts of the craft of manga and […] the nature of its appeal beyond Japan”, and was followed by a Q&A session.

The speeches were pretty interesting. Kelts started off by giving an overview of the history of manga, ranging from the 40s and 50s art of Osamu Tezuka to its current penetration of American bookstores. He then handed over to Kubo-san for some industry perspective, including details of how a week’s worth of manga used to correspond to just 15 minutes on screen, and the fact that anime has spread to other countries partly because it is easier (and hence cheaper) to dub than other animation, since its mouth movements are less precise.

I asked the speakers if something like Brewster Kahle’s book archiving / book mobile project (which I blogged about last week; see video here) would have relevance to the world of manga, since Kubo-san mentioned that a lot of manga is now being digitised. Kubo said that since there are various upload / download legalities with respect to currently-licensed manga, this would be difficult, but that anything that fell outside the (previously) 50-year copyright span could potentially be provided in such a manner.

I enjoyed the session, and even found a picture of the back of my head and boards.jp t-shirt on the Japanamerica blog! My own photos are here.

Web 2.0 Expo Tokyo: Eric Klinker – “Web 2.0 and content delivery”

My last report from the Web 2.0 Expo Tokyo event is about the talk by Eric Klinker, chief technical officer for BitTorrent Inc. (I met Eric and his colleague Vincent Shortino briefly on Thursday evening), who gave a talk about “the power of participation”.

The market for IP video is huge, and a Cisco report called the “Exabyte Era” shows that P2P, which currently accounts for 1,014 PB of traffic each month, will continue to rise at a 35% year-over-year growth rate. User-contributed computing is happening right now, and is delivering over half of all Internet traffic today.

A new order of magnitude has arrived: the exabyte (EB). One exabyte is 2^60 bytes, or roughly 1 billion gigabytes. If you wanted to build a website that would deliver 1 EB per month, you would need to be able to transfer at a rate of about 3.5 Tb/s (assuming 100% network utilisation). 1 EB corresponds to 3,507,000 months or 292,000 years of online TV (streamed at 1 Mb/s), 64,944 months or 5,412 years of Blu-ray video (at the maximum standard rate of 54 Mb/s), 351 months or 29 years of online radio traffic, 20 months or 1.7 years of YouTube traffic, and just one month of P2P traffic.

If you have a central service and want to deliver 1 EB per month, you would need about 6.5 Tb/s of peak bandwidth and around 70,000 servers drawing about 60-70 megawatts in total. At a price of $20 per Mb/s, it would cost about $130 million per month to run!
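A quick back-of-the-envelope check shows that these figures hang together, provided the transfer rates are read in bits per second. The sketch below assumes 1 EB = 2^60 bytes and a 30-day month, as in the talk:

```python
# Back-of-the-envelope check of the exabyte-delivery figures above.
# Assumes 1 EB = 2**60 bytes and a 30-day month.

EXABYTE = 2**60          # bytes
MONTH = 30 * 24 * 3600   # seconds in a 30-day month

# Average rate needed to push 1 EB/month at 100% utilisation, in Tb/s
avg_tbps = EXABYTE * 8 / MONTH / 1e12
print(f"average rate: {avg_tbps:.2f} Tb/s")   # about 3.5 Tb/s

# Cost of a 6.5 Tb/s peak at $20 per Mb/s per month
peak_mbps = 6.5e6        # 6.5 Tb/s expressed in Mb/s
cost_millions = peak_mbps * 20 / 1e6
print(f"monthly cost: ${cost_millions:.0f} million")
```

The average rate comes out at roughly 3.5 Tb/s, and the quoted $130 million per month follows directly from the $20 per Mb/s price.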

The “Web 2.0” way is to use peers to deliver that exabyte. However, not every business is ready to be governed by their userbase entirely. There is an opportunity to take a hybrid model approach. BitTorrent are a content-delivery network that can enable Internet-based businesses to use “the power of participation”. 55 major studios and 10,000 titles are now available via BitTorrent.com (using BitTorrent DNA). Also, the BitTorrent SDK allows BT capability to be added to any consumer electronic device.

He then talked about the Web 2.0 nature of distributed computing, and how we can power something that wouldn’t or couldn’t be powered otherwise. For example, Electric Sheep is a distributed computing application that renders a single frame of a 30-second screensaver on your machine, which you can then use. Social networks also have a lot of machines, but the best example of distributed computing is search. Google has an estimated 500k to 1M servers, corresponding to $4.5B in cumulative capex (that’s capital expenditure to you and me) or 21% of their Q2 net earnings (according to Morgan Stanley). And yet, search is still not a great experience today, since you still have a hard time finding what you want. Search engines aren’t contextual, they don’t see the whole Internet (the “dark web”), they aren’t particularly well personalised or localised, and they aren’t dynamic enough (i.e., they cannot keep up with most Web 2.0 applications [although I’ve noticed that Google is reflecting new posts from my blog quite quickly]).

The best applications involve user participation, with users contributing to all aspects of the application (including infrastructure). Developers need to consider how users can do this (through contributed content, code or computing power). As Eric said, “harness the power of participation, and multiply your ability to deliver a rich and powerful application.”

Web 2.0 Expo Tokyo: Håkon Wium Lie – “The best Web 2.0 experience on any device”

There was a talk at the Web 2.0 Expo Tokyo last Friday afternoon by Håkon Wium Lie, chief technical officer with Opera Software. He has been working on the Web since the early nineties, and is well known for his foundational work on CSS. Opera is headquartered (and Håkon is based) in Norway.

Håkon (pronounced “how come”) started by talking about the Opera browser. Opera has browsers for the desktop, for mobiles and for other devices (e.g., the Nintendo Wii and the OLPC $100 laptop). He thinks that the OLPC machine will be very important (he also brought one along to show us, pictured), and that the browser will be the most important application on this device.

Another product that Opera are very proud of is Opera Mini, which is a small (100k) Java-based browser. Processing of pages takes place via proxy on a fixed network machine, and then a compressed page is sent to Opera Mini.

He then talked about new media types on the Web. Håkon said that video needs to be made into a “first-class citizen” on the Web. At the moment, it takes a lot of “black magic” and third-party plugins and object tags before you can get video to work in the browser for users. There are two problems that need to be solved. In relation to the first problem – how videos are represented in markup – Opera proposed that the <video> element be added to the HTML5 specification. The second problem is in relation to a common video format. The Web needs a baseline format that is based on an open standard. Håkon stated that there is a good candidate in Ogg Theora, which is free of licensing fees, and in HTML5 there may be a soft requirement or recommendation to use this format. He showed some nice mockups of Wikipedia pages with embedded Ogg videos. You can also combine SVG effects (overlays, reflections, filters, etc.) with these video elements.

He then talked about the HTML5 specification: the WHATWG (Web Hypertext Application Technology Working Group) was set up in 2004 to maintain HTML, and a W3C HTML working group was also established earlier this year. HTML5 will include new parsing rules, new media elements, and some semantic elements (section, article, nav, aside), while some presentational elements (center, font) will be removed.

Håkon next described how CSS is also evolving. As an example, he showed us some nice screenshots from the CSS Zen Garden, which takes a deliberately plain document and invites people to apply their own stylesheets to change its look. Most of them use background images to style the document (rather than changing the fonts dramatically).

CSS has a number of properties for handling fonts and text on the Web. Browsers can rely on around ten fonts that are available on most platforms (essentially Microsoft’s core fonts for the Web). But there are many more fonts out there: Font Freak alone offers some 2,500 font families. Håkon wants to see more browsers being able to easily point to and use these interesting fonts. CSS 2 already lets you link to fonts residing on the Web, and he reiterated his hope that such fonts will be used more in the future.

Another use for CSS3 is in professional printing. Using the Prince tool, Håkon has co-written a book on CSS using CSS3. CSS3 can allow printing requirements to be specified such as multiple columns, footnotes, leaders, etc.

He then talked about the Acid2 test. Acid2 consists of a single web page, and if a browser renders it correctly, it should show a smiley face. Every element is positioned by some CSS or HTML code with some PNGs. Unfortunately, Internet Explorer performs worst in this test. But I also tested out Firefox 2 and got something distorted that looked like this.

The last thing he talked about was 3D. He gave a nice demo of Opera with some JavaScript that interfaces with the OpenGL engine to render a PNG onto a cube and rotates it. He also showed a 3D snake game from Opera (only a hundred or two lines of code), which is available at labs.opera.com.

I really enjoyed the forward-looking nature of Håkon’s presentation, and said hello briefly afterwards to say thanks for Opera Software’s (via Chaals and Kjetil) involvement in our recent SIOC member submission to the W3C.

Web 2.0 Expo Tokyo: Joe Keller – “Understanding and applying the value of enterprise mashups to your business”

(Another delayed report from a talk last Friday at the Web 2.0 Expo.)

Joe Keller is the marketing officer with Kapow, so I was expecting a marketing talk, but there was a nice amount of technical content to keep most people happy. Joe was talking about “getting business value from enterprise mashups”. Kapow started off life as a real-estate marketplace in Europe ten years ago, but moved towards its current focus on mashups after 2002. Referencing Rod Smith, whom I saw last year at BlogTalk 2006, Joe said that mashups allow content to be generated from a combination of rich interactive applications and do-it-yourself applications, plus the current scripting renaissance.

According to McKinsey, productivity gains through task automation have peaked, and the next productivity wave will be data-oriented as opposed to task-oriented. Joe says that Web 2.0 technologies are a key to unlocking this productivity. He also talked about two project types: systematic projects are for conservative reliability, whereas opportunistic projects (or “situational applications” to use the IBM terminology) are for competitive agility. Mashups fit into the latter area.

The term mashup can apply to composite applications, gadgets, management dashboards, ad hoc reporting, spreadsheets, data migration, social software and content aggregation. The components of a mashup are the presentation layer, logic layer, and the data layer (access to fundamental or value-added data). In this space, companies are either operating as mashup builders or mashup infrastructure players like Kapow.

The main value of mashups is in combining data. For example, HousingMaps, the mashup of Google Maps and data from Craigslist, was one of the first interesting mashups. The challenge is that mashups are normally applied to everyone’s data, but if you’re looking for a house, you may want to filter by things like school district ratings, fault lines, places of worship, or even by proximity to members of your LinkedIn / MySpace network.
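At its core, this kind of mashup is just a join: filter one data source by a value looked up in another. A minimal sketch in Python (all names, listings and ratings here are invented for illustration):

```python
# Minimal sketch of the mashup-as-data-join idea: filter one data
# source (housing listings) by a value looked up in another (school
# district ratings). All data here is invented for illustration.

listings = [
    {"address": "12 Elm St", "price": 450_000, "district": "North"},
    {"address": "8 Oak Ave", "price": 380_000, "district": "South"},
    {"address": "3 Pine Rd", "price": 510_000, "district": "North"},
]
school_ratings = {"North": 9, "South": 5}

def good_school_listings(listings, ratings, min_rating=8):
    """Keep only listings whose district meets the rating threshold."""
    return [l for l in listings if ratings.get(l["district"], 0) >= min_rating]

for home in good_school_listings(listings, school_ratings):
    print(home["address"])
```

A real mashup does the same thing, only with the two sources living behind different web APIs rather than in local variables.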

He then listed some classes of mashup data sources. Fundamental data includes structured data, standard feeds, and data that can be subscribed to – basically anything that is open to everyone. Value-added data is more niche: unstructured data, individualised data, vertical data, etc. The appetite for data collection is growing, especially around automation to help organisations with this task. The amount of user-generated content (UGC) available is a goldmine of information for companies, enabling them to create more meaningful time series that can be mashed up quickly into applications. According to ProgrammableWeb, there are now something like 400 to 500 mashup APIs available, but there are 140 million websites according to Netcraft, so there is a mismatch between the number of services available and the number of sites.

Kapow aims to turn data into business value, “the right data to the right people at the right time.” Their reputation management application allows companies to find out what is being said about a particular company through blogs, for sentiment analysis. They also provide services for competitive intelligence, i.e., how do you understand the pricing of your competitors in an automated fashion. Asymmetric intelligence is another service they provide for when people are looking for a single piece of information that one person has and no-one else possesses. Business automation is where mashups are being used to automate internal processes, e.g., to counteract the time wasted by “swivel-chair integration” where someone is moving from one browser on one computer to another and back again to do something manually. Finally, opportunistic applications include efforts whereby companies are aiming to make users part of their IT “team”, i.e., by allowing users to have access to data and bringing this into business processes: Web 2.0 infrastructure allows companies to use collective wisdom using Kapow technologies.

About RSS, Joe said that almost every executive in every corporation is starting to mandate what feeds he wants his company to provide (and RSS feeds are growing as quickly as user-generated content in blogs, wikis, etc.). Kapow’s applications allow you to create custom RSS feeds, but he gave a short demo of using Kapow to build an on-the-fly REST service instead. His service produced the quote for a company’s stock price by extracting identified content from an area of a web page, which could then be incorporated into other applications like an Excel spreadsheet. I asked Joe if it is difficult to educate end users about REST and RSS. He pointed to the ease with which most people can add feeds to iGoogle and said that it’s becoming easier to explain this stuff to people.
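Kapow’s tooling is proprietary, but the underlying idea of the demo – pull one identified value out of a page and expose it as structured data – can be sketched with the Python standard library. The HTML snippet and element id below are invented for illustration; a real page would be fetched over HTTP and be far messier:

```python
# Sketch of the screen-scraping idea behind the demo: extract one
# identified value from a web page and return it as structured data.
# The HTML snippet and the "last-price" id are invented for
# illustration; a real page would be fetched with urllib.
import re

page = '<html><body><span id="last-price">42.17</span></body></html>'

def extract_quote(html, element_id="last-price"):
    """Return the numeric text of the element with the given id, or None."""
    match = re.search(r'id="%s">([\d.]+)<' % element_id, html)
    return float(match.group(1)) if match else None

print(extract_quote(page))
```

Wrapping a function like this behind a URL is essentially what “an on-the-fly REST service” means: the messy page becomes a clean, machine-readable endpoint.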

Kapow’s server family allows portal creation, data collection (internal and external), and content migration via mashups, which Joe reckons are often more useful than static migration scripts since they can be customised and controlled. Kapow also provide a free “openkapow” API and site for developers to share how they build mashups and feeds.

In summary, Joe gave these takeaways:

  • The next business productivity wave will be via data and know-how automation, not routine task automation.
  • Knowledge workers need self-service mashup technology to take advantage of this.
  • Access to critical (value-added) data can create a competitive edge.
  • Web 2.0 technologies complement existing IT systems to maintain the competitive edge.

Web 2.0 Expo Tokyo: Scott Dietzen – “The impact of ’software as a service’ and Web 2.0 on the software industry”

Scott Dietzen, president and CTO of Zimbra, gave the third talk last Friday at Web 2.0 Expo Tokyo. Zimbra has been on the go for four years (so they are Web 2.0 pioneers), and embarrassingly I told Scott that I only found out about them very recently (sorry!). Scott’s aim for this talk was to share the experience of having one of the largest AJAX-based web applications (thousands of lines of JavaScript code). Since their status has changed since they originally signed up for the conference, he mentioned that Yahoo! are the new owners of Zimbra. But Scott affirmed that Zimbra will remain open source and committed to their partners and customers who have brought Zimbra to where it is.

Web 1.0 started off for consumers, but began to change the ways in which businesses used technology. A handful of technologies allowed us to create a decade of exciting innovations. With Web 2.0, all of us have become participants, often without realising the part we play on the Web – clicking on a search result, uploading a video or a social network page – all of this contributes to and changes the Web 2.0 infrastructure. This has enabled phenomena like Flickr (now part of Yahoo!) and open source, where a small group of people get together, put a basic model forward, and then let it loose; many contributions from around the world then turn it into something much bigger. There are 11,000 participants in the Zimbra open source community, vastly more than the personpower Zimbra or Yahoo! could put into the project.

Mashups may be the single best thing for Zimbra. AJAX has won over much of the Internet because websites have voted with their feet, and according to Scott “it actually works”. Scott was formerly part of the WebLogic team, and one of that team said recently that there was a special place in heaven for whoever in Zimbra had the patience to get all of that JavaScript programming working properly. There are currently 50 or 60 AJAX development toolkits, but Scott hopes that the industry can rally around a smaller number, especially open-source technologies which offer long-term portability across all the leading platforms.

Another issue is that browsers weren’t initially designed for this kind of “punishment”, so it’s taken time for browsers to become solid productive AJAX containers. They can still do better, and Scott said he is excited to see the emergence of JIT-compilation technology that will allow browsers to operate at least two to three times faster.

With Zimbra, there is no caching of user data within the client, so on a public kiosk there will be no security leaks under AJAX. The challenge is that the source code is now available to anyone with a web browser, so it is crucial to protect against ever executing any JavaScript code that is external to your application. For the first time, we have a universal user interface that allows one to mix and match various UIs together: Scott reckons we’ve only just begun to scratch the surface of what can be done with mashups.

There are four techniques for speeding up AJAX applications. First, combine resources together where possible. Second, compress the pages to shrink the bandwidth required for smaller pipes. Third, cache, to avoid having to re-fetch and re-interpret the JavaScript (in Zimbra, they include dates for when the JS files were last updated). The last and best technique is “lazy loading”: Zimbra is a very large JS application in one page, and by breaking it up into several modules that can be loaded on demand, one can reduce the time before you can first see and start using the application.

Offline AJAX is a fundamental change but offers many opportunities. You can have the web experience when on a flight or when far away from the data centre that you normally use. Zimbra is faster to use as an offline version while synchronising back to California, rather than having to wait for every operation to cross the ocean and back again. For Zimbra, they took the Java server code and produced a micro version to install on the local desktop. This allows one to preserve all the “smarts” and make them available to desktop users. Offline isn’t for everything – for example, when data is so sensitive that it shouldn’t be cached on a PC, or when old data goes stale and no longer makes sense. You also have to solve synchronisation issues: you can be changing your mailbox while on a plane, but new mail is arriving in the meantime, and some reconciliation has to take place. And there is the classic problem of desktop apps in general: once you have shipped code, how do you upgrade it?
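The reconciliation problem Scott mentioned can be sketched in miniature: merge the state changes made offline with the mail that arrived on the server in the meantime. The toy data model below is invented for illustration, and a real mail store would of course also have to handle conflicting edits:

```python
# Sketch of the mailbox-reconciliation problem: merge changes made
# offline with mail that arrived on the server in the meantime.
# The data model (message id -> read state) is invented for
# illustration; real reconciliation must also resolve conflicts.

server = {1: "unread", 2: "unread"}      # state on the server
offline_changes = {1: "read"}            # edits made on the plane
arrived_while_offline = {3: "unread"}    # new mail on the server

def reconcile(server, local_edits, new_mail):
    """Combine server state, newly arrived mail, and local edits."""
    merged = dict(server)
    merged.update(new_mail)      # take new messages from the server
    merged.update(local_edits)   # replay local state changes on top
    return merged

print(reconcile(server, offline_changes, arrived_while_offline))
```

Here local edits simply win over server state, which is one defensible policy; the point is that some explicit merge step is unavoidable once clients can diverge from the server.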

In Web 1.0, UI logic, business logic, and data logic were all supposed to be separated. They could fix some (X)HTML and SQL schemas to aid with this, but in practice people didn’t modularise. In Web 2.0, there is an effort to have clearer separations (due to mashups, feeds, etc.) between client / UI logic, server / business logic, and data logic. It’s better to modularise, and getting people to move towards a more modular architecture will allow you to get more value from your business applications, and will also allow you to “play in this Web 2.0 world”. In relation to SOA, Scott said that we are perhaps moving from something like ISO, where there’s one big document with a 10,000 page specification, to something almost as problematic, where there are one-page specs for 10,000 or more web services. There is a well-known theory that you can’t start with something complex and expect everyone to suddenly start using it.

He then focused on software as a service, or SaaS. SaaS is inherent in the Web, since the service is off somewhere else when you open up a browser page. He also talked about the opportunities when businesses are put together in the same data centres. This results in multi-tenancy: the ability (or need) to set up a single server farm with tens of thousands of companies’ data all intermixed in a secure way, without the companies compromising each other. There is a need to think about how to manage so many users together at once, which is usually achieved through a common class of service for all these users as a whole. Administration should be delegated to some extent, which is an important aspect to get right. You may also want to allow users to customise and extend their portion of the applications they are using, where appropriate.

Scott next talked about convergence. E-mail has made great progress in becoming part of the web experience (Hotmail, GMail, Yahoo! Mail, etc.). The same thing is now happening to IM, to VoIP, to calendars, etc. For example, a presence indicator next to an e-mail inbox shows if each user is available for an IM or a phone call. Or the reverse: someone tries to call or IM you, but you can push back and say that you just want them to e-mail you because you’re not available right now. Being able to prioritise communications based on who your boss is, who your friends are, etc., is a crucial aspect of harnessing the power of these technologies. On voice, we want to be able to see our call logs, to use these to dial our phone, e.g., you want to just click on a person and call that person. You may also want to forward segments from that voice call over e-mail or IM.

In Japan, they have had compelling applications for mobile devices for a lot longer than the rest of the world. Scott showed a demonstration of the Zimbra experience on an iPhone. He finished by saying that all of these new technologies have to be used in the right way to make someone’s life better and to make usage more compelling. Innovation – thinking deeply about what the future ought to look like – is very important.

Seiji Sato from Sumitomo, whose subsidiary Presidio STX invested in Zimbra last year, then spoke. He started by mentioning that over 100 corporations are now using Zimbra. Sumitomo hopes to contribute to “synergy effects” in Japan’s Web 2.0 community by mashing up services from various businesses and by providing the possibility to extend and utilise available company competencies.

To expand Zimbra’s usage in Japan, Sumitomo have been working with their associate company FeedPath and other Web 2.0 businesses, providing Zimbra localisation and organising a support structure both for this market and for early adopters. Sato said that although Sumitomo are not a vendor or manufacturer, they feel that the expansion of Web 2.0 is quite viable and very important.

After the talk I asked Scott if Zimbra would be looking at leveraging any widgets that will be developed under the much-hyped OpenSocial initiative within the Zimbra platform, since it seemed to me that there is a natural fit between the implicit social networking information being created within Zimbra and the various widgets that are starting to appear (and I guess since they are just in XHTML / JS, there’s a technology match at least). Scott told me that Zimbra already has around 150 plugins, and that the ability to layer mashups on top of this implicit social network information is certainly very important to them. He was unsure if OpenSocial widgets would fit to Zimbra since their e-mail platform is quite different from SNS platforms, but he did say [theoretically, I should add, as there are no plans to do so] that if such widgets were ported to work with Zimbra, they would probably require extensive testing and centralised approval rather than just letting anybody use whatever OpenSocial widget they wanted to within Zimbra.