The Pulse

January 2004


The Word on the Street from the Heart of Tokyo

by The Editors / Mark McCracken

Ashikaga Implodes

Even the biggest optimist had to admit that it was all going a little too well to be credible. The Japanese banking system, though allegedly on the mend, is still the kind of patient that can suffer a sudden relapse and put the entire industry back on the critical list.

With the end of the year in sight, Ashikaga brought Japan's banking collapse total to two. There were plenty of similarities between the nationalization of Ashikaga and the partial state seizure of Resona back in May. But it's the few critical differences that worry us.

First of all, there was the apparent lying and deceit -- all the more disturbing because it was carried out by at least three central government institutions: the Prime Minister's Office, the Financial Services Agency (FSA) and the Cabinet Office. When news of a possible Ashikaga bailout first broke, all three institutions brazenly denied that a bailout was even "under consideration."

That cannot possibly have been true.

The bailout of a bank is not an overnight decision, even in Japan. Yet the denials continued until just a few days before the JPY1 trillion bailout.

Then there are the circumstances of the announcement itself: 10 pm on a Saturday night, in the middle of a weekend that just happened to include a major public holiday in the US. Remember that the Resona announcement was also made during a Japanese national holiday weekend. Sunday newspapers were able to get the story into their final editions, and market players sitting at home over the weekend had a chance to let the news sink in.

As with the Resona bailout, the FSA and the Bank of Japan (BOJ) were at pains to limit the fallout of the collapse. Deposits were all guaranteed; the incompetent board of Ashikaga was sacked and replaced with an FSA team; the BOJ made sure there was enough cash liquidity to withstand a bit of panic; and local companies in Tochigi (where Ashikaga is the dominant financial institution) were treated to some sage advice on how to save their businesses.

As with Resona, the Ashikaga measures are very likely to provide the sort of makeshift bandages that the overall banking system needs to continue limping on. But the Ashikaga mess has revealed considerable discrepancies between what the management was saying about the state of its bad loan problems and the reality of its predicament. Other banks may not be in quite such dire straits as Ashikaga (the Bank of Japan had long warned that there were problems lurking there), but many of them cannot be far behind.

The next big concern, therefore, is how the government will treat each future case, and it is here that the Ashikaga experience sends the most worrying message. The nationalization of Ashikaga, unlike that of Resona, left shareholders in the bank with absolutely nothing. The shares were seized, not bought. And the government offered no concessions.

-- The Editors

ATR Innovates

Wish your mobile phone could also translate Japanese to English? The Advanced Telecommunications Research Institute (ATR) in Nara continues to crank out innovations. The research center recently went through some restructuring after changes in the way it is funded: although the Japanese Key Technology Center supplied its funds for the first 15 years after its foundation, ATR now conducts commissioned research financed through competitively awarded research funds. One thing that hasn't changed, however, is the steady supply of cutting-edge research and technological innovation, including, possibly, a translating mobile phone.

The Spoken Language Translation Research Laboratories at ATR recently demonstrated a system that shows a lot of promise. Two people, one a native Japanese speaker and one a native English speaker, each carried a PDA connected to earphones and a microphone, with each PDA communicating over a wireless LAN card. The two speakers simulated a foreigner checking into a hotel. Each sentence a speaker uttered was automatically converted into text, translated and read aloud to the other person in that person's own language. The PDA also displayed a text record of the conversation.
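For the technically curious, the demo boils down to a three-stage pipeline: speech recognition, machine translation and speech synthesis. The toy Python sketch below illustrates that flow for a single conversational turn; every function and the phrasebook "translation engine" are hypothetical stand-ins, since ATR's actual software has not been published.

```python
# Toy sketch of the three-stage pipeline behind the ATR demo:
# speech recognition -> machine translation -> speech synthesis.
# All components here are hypothetical stand-ins; ATR's software is not public.

PHRASEBOOK = {
    ("ja", "en"): {"チェックインをお願いします": "I'd like to check in, please."},
    ("en", "ja"): {"May I have your name?": "お名前を伺えますか。"},
}

def recognize_speech(audio: bytes, language: str) -> str:
    """Stand-in for the speech recognizer: turn captured audio into text."""
    # A real recognizer would decode a waveform; here the "audio" is
    # simply the UTF-8 encoded transcript.
    return audio.decode("utf-8")

def translate(text: str, source: str, target: str) -> str:
    """Stand-in for the translation engine: a simple phrasebook lookup."""
    return PHRASEBOOK[(source, target)].get(text, "<could not translate>")

def speak(text: str) -> None:
    """Stand-in for speech synthesis on the listener's PDA."""
    print(f"(spoken aloud) {text}")

def relay_turn(audio: bytes, source: str, target: str, transcript: list) -> None:
    """One conversational turn: recognize, translate, speak, and log on screen."""
    heard = recognize_speech(audio, source)
    rendered = translate(heard, source, target)
    speak(rendered)
    transcript.append(f"[{source}] {heard}  ->  [{target}] {rendered}")

# Simulated hotel check-in: the Japanese guest speaks, the clerk hears English.
transcript = []
relay_turn("チェックインをお願いします".encode("utf-8"), "ja", "en", transcript)
print("\n".join(transcript))  # the PDA also displays a running text record
```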

There are still a few areas that need improvement. First, getting the entire system out of the PDA/earphone/microphone set-up and into a mobile phone would greatly improve usability. Second, there is a slight delay in translation, which seems to vary with the difficulty of the sentence. In the recent demonstration, wait times averaged about 4 seconds for Japanese-to-English translations and 6 seconds for English-to-Japanese.

The back-end support is substantial. To demonstrate a tourist conversation within a limited setting, the tech staff had to string together six high-end PCs, a server and a database of 23,000 words. A system that could translate nearly any conversation would need a database of around 300,000 words. (The group is also working on a similar Japanese-to-Chinese translation system.)

It's very easy to see the market potential of a mobile phone translation system. If the system worked effectively on a national basis, two people could hypothetically punch in a code on their mobile phones and communicate effectively across languages, regardless of whether they were in the same physical location.

The ATR is taking more than one approach in getting computers and robots to speak directly to you. The Biological Speech Science Project in the Human Sciences Information Laboratories has created what might be called a plastic singing vocal chamber. They first took a Magnetic Resonance Image (MRI) of a person singing a vowel -- "ahhhhhhhhh," for example. From the MRI, they created a stiff three-dimensional hollow plastic model of the mouth and throat. They hooked the model up to a frequency-vibration machine and created a sound similar to the one made by the original human. ("Similar" being defined here by a blue and a red line having similar patterns on a graph that goes far beyond the understanding of your Average Joe.)
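As for what those blue and red lines might be: given the frequency-vibration set-up, a plausible guess is that they are the frequency spectra of the human vowel and the plastic model's vowel. The toy Python sketch below shows how such a comparison could be boiled down to a single number; the synthetic signals, formant values and use of spectral correlation are illustrative assumptions, not ATR's published method.

```python
# Toy version of the "blue line vs. red line" comparison: treat each vowel
# as a signal, take its frequency spectrum, and ask how alike the two
# spectra are. Everything here is an illustrative assumption.
import numpy as np

RATE = 16000                              # assumed sampling rate (Hz)
t = np.arange(RATE) / RATE                # one second of time points

def vowel(f0: float, formants: list) -> np.ndarray:
    """Crude vowel-like signal: a fundamental plus a few resonances."""
    return sum(np.sin(2 * np.pi * f * t) for f in [f0] + formants)

human   = vowel(120.0, [800.0, 1200.0])   # the person singing "ahhhh"
plastic = vowel(120.0, [790.0, 1230.0])   # the plastic model, slightly off

spectrum_human   = np.abs(np.fft.rfft(human))
spectrum_plastic = np.abs(np.fft.rfft(plastic))

# One number summarizing how closely the two curves track each other.
similarity = np.corrcoef(spectrum_human, spectrum_plastic)[0, 1]
print(f"spectral similarity: {similarity:.3f}")
```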

Suggestions that they might next try to use pliable plastic connected to small motors to create human-like speech are shot down with a quick and pragmatic recitation of the project's goal -- to find the source of "individuality" in the human voice. Both the shape of the vocal tract and the geometry of the lower pharynx contribute to the sound.

ATR's Media Information Science Research Laboratories' Senseweb could easily be mistaken for the Internet's equivalent of the Lava Lamp. Speak a word into the headset and a widescreen TV shows images gathered from Web sites, which seem to bubble up out of the center of the screen. Touch the screen with your hands and you can move the images around, toss aside the ones that bore you and open those that interest you. Using the Senseweb is a bit like playing Tom Cruise in the movie Minority Report, only in two dimensions instead of three. The idea is to allow users not only to access a large amount of data intuitively, but also to have fun controlling the flow and presentation of that data. It was designed in anticipation of future applications in entertainment, edu-tainment and art that will call for more playful and intuitive interactions.

Another project the Media Information Science Lab is working on is the collaborative capturing of interactions by multiple sensors. Ever wonder what would happen if a bunch of people each strapped on a head-mounted camera, a headset microphone, physiological sensors and a small personal computer, then walked into a room filled with stationary video cameras and microphones and looked at objects with LED sensors attached? No? Well, these folks have. The less-than-obvious aim here is to understand both verbal and non-verbal human interaction mechanisms, and to have those interactions recognized by a computer.

ATR's Human Information Processing Research Laboratory has helped in the development of a wireless tongue pressure sensing system that allows users to maneuver electric equipment using only the tips of their tongues.

For example, a quadriplegic person could ideally control the movement of a wheelchair: the sensors in the mouth unit would control direction, and the magnitude of pressure on the sensors would determine the wheelchair's speed.

The system uses an onboard FM radio receiver and microprocessor to generate drive signals for the wheelchair. Possible future applications of the technology include remote control of an electric bed, television, air conditioner, telephone or personal computer.
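As a rough illustration of that control scheme, the sketch below maps four hypothetical mouth sensors to a drive vector: the hardest-pressed sensor chooses the direction, and the pressure magnitude sets the speed. The sensor layout, threshold and units are assumptions for illustration only, not ATR's actual design.

```python
# Toy sketch of the control mapping described above: which sensor is
# pressed picks the direction; how hard it is pressed sets the speed.
# The four-sensor layout and threshold are illustrative assumptions.

MAX_SPEED = 1.0          # normalized top speed of the wheelchair
PRESS_THRESHOLD = 0.1    # ignore readings below this (resting tongue)

DIRECTIONS = {"front": (0, 1), "back": (0, -1), "left": (-1, 0), "right": (1, 0)}

def drive_signal(pressures: dict) -> tuple:
    """Map tongue-pressure readings (0.0-1.0 per sensor) to an (x, y) drive vector."""
    sensor = max(pressures, key=pressures.get)   # hardest-pressed sensor wins
    force = pressures[sensor]
    if force < PRESS_THRESHOLD:
        return (0.0, 0.0)                        # no deliberate press: stop
    dx, dy = DIRECTIONS[sensor]
    speed = min(force, 1.0) * MAX_SPEED          # pressure magnitude sets speed
    return (dx * speed, dy * speed)

# A firm press on the front sensor drives the chair forward at 80% speed.
print(drive_signal({"front": 0.8, "back": 0.0, "left": 0.05, "right": 0.0}))
```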

-- Mark McCracken
