Harry Robinson
Sankar Chakrabarti
Quality software is the result of many teams bringing different skills to bear on a problem. Nowhere is this fact more obvious than when creating software that runs in many different languages and locales. Having software that can execute in English, French, German, Japanese and a host of other languages is an advantage in the marketplace: the software can reach a larger market because users feel comfortable running the application in their native languages. But testing localized software poses a challenge to quality assurance. One significant problem is that people testing an application in a language foreign to them may not know whether what they are looking at is correct or not. The people who could determine if the software is correct may be on the other side of the world, unable to run the software themselves.
Our solution to this problem uses the facilities of the World Wide Web to bridge the gap between those able to run the tests and those able to determine if the results are correct.
Most software applications intended for the international market are localized. Localization is the process that allows menus and messages in an application to appear in users' native languages. Because people are most comfortable using software when the user interface is in a familiar language, localization is an important part of selling software in non-English-speaking nations. It is common for an application to be localized into a dozen languages.
Software in our lab is developed originally with English user messages. The UNIX mechanism for localizing software is to store all user message strings in a file called a message catalog. A copy of this message catalog is then sent to a language expert, known as a localizer, who translates the English message strings into the target language, such as French or Japanese. The localizer then returns the translated message catalog to our lab. When the application is run with the translated message catalog in place, all user menus and messages should appear in the target language.
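For readers unfamiliar with message catalogs, the fragment below shows the general POSIX pattern (setlocale, catopen, catgets) an application follows to fetch a translated string. The catalog name and the set and message numbers are illustrative only and are not taken from our applications:

    /* Sketch of how an application fetches a localized string from a
     * POSIX message catalog.  The catalog name ("imageview.cat") and the
     * set and message numbers below are illustrative only.               */
    #include <locale.h>
    #include <nl_types.h>
    #include <stdio.h>

    int main(void)
    {
        nl_catd cat;

        /* Select the locale requested by the user's environment (LANG, etc.). */
        setlocale(LC_ALL, "");

        /* Open the translated catalog; NL_CAT_LOCALE makes the lookup honor
         * the LC_MESSAGES setting.                                           */
        cat = catopen("imageview.cat", NL_CAT_LOCALE);

        /* Fetch message 1 of set 1.  The English string is the fallback
         * returned if the catalog or the message cannot be found.            */
        printf("%s\n", catgets(cat, 1, 1, "View"));

        if (cat != (nl_catd)-1)
            catclose(cat);
        return 0;
    }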
Developing localized software poses some unique challenges. The development team and localizers of an application may never actually meet each other. There are often several different localizers working on an application and they may be located anywhere in the world.
Being scattered around the world makes it difficult for localizers and development teams to work together. For instance, when the localizers receive a message catalog, they may have little familiarity with the software application. This lack of familiarity can lead to translation errors; a word such as "File" may be translated in different ways, depending on whether it is used as a noun or a verb.
Likewise, the testers who are exercising the application are unlikely to be skilled in the dozen languages into which the application has been localized. This means that they cannot be certain that the application is running correctly in these locales.
As an example of the type of subtle bug that can be introduced in localized software, look at Figures 1 and 2 below.
Figure 1: Valid ImageView Menu Bar
Figure 2: Corrupted ImageView Menu Bar
Figure 1 shows the correct menu bar for a Japanese locale version of the ImageView application. The word that precedes the "(V)" on the menu bar is transliterated as "hyoji" and means "View". Figure 2, on the other hand, contains meaningless, corrupted characters (circled). Would a tester who does not read Japanese be able to recognize that the application in Figure 2 is incorrect? More likely, the tester would see Japanese-looking characters and assume that the menu bar items are correct.
One of the chief difficulties in localizing software is the lack of opportunity for collaboration between the development team and the localizers. The localizers have the language expertise, but they do not have access to the software to run the application. The development team can run the application, but lacks the language expertise to detect subtle localization problems. These difficulties have traditionally been addressed in one of two ways.
The first method is to provide an early version of the software to the localizers so that they can test the localized version. This approach has at least two drawbacks. First, the localizers may not have the necessary equipment or computer knowledge to install and run the application. Second, the localizers are not experts in testing the application, so there is no guarantee that they will exercise the software thoroughly.
A second common method is to bring localizers on-site to the development lab where they can try out the localized software. This approach has the benefit that the localizers can work closely with the development team to run the application and give feedback. However, this approach is expensive because it involves providing travel and lodging for the localizers. As the number of localizations grows, this cost becomes prohibitive. Also, if the localizers are brought on-site early in the process, there is a risk that they will not catch errors introduced later in the process. If, on the other hand, localizers do the testing shortly before release, it will be too late in the development cycle to make any significant changes.
Our strategy, which we call Traveler Tours, uses the World Wide Web to bridge the gap between those who have the ability to run the tests and those who can determine if the output is correct. The metaphor we use is that of a traveler who visits various locations and then returns home with snapshots from the trip. People interested in seeing what a distant land looks like do not have to travel there themselves; instead, they can examine the snapshots of others who have been there.
Just as looking through a photo album is simpler and cheaper than traveling to distant lands oneself, a Traveler Tour provides a simple way for many people in different groups and even different parts of the world to see the output of test runs of localized applications. In a Traveler Tour, an automated test program called Traveler exercises the application in all available locales, taking screen captures of each application window as it goes. These screen captures are automatically loaded onto web pages by a script called the Webifier. Interested parties can view the application in various locales by browsing the web pages; they do not need to run the application themselves.
The Traveler is a simple test program written in C using Synlib [1] test libraries. The Traveler program is locale-independent and can set the locale of the system under test. (Methods for writing locale-independent tests and tests that can change locales are explained at length in [2].)
Because the Traveler is merely driving the application and is not verifying behavior itself, it works according to a simple template inside any locale:
1. Drive the application to the next screen of interest.
2. Capture the screen image.
3. Save the image to a file.
4. Go to step 1.
The name of the saved GIF image file contains
the application name,
the locale being used and
the sequence number of the image within that locale.
For example, the eighth captured screen image of the DT_PAD application in the Japanese locale (ja_JP.SJIS) would be saved in a file named "DT_PAD.ja_JP.SJIS.08.gif".
To provide some context for those who will view this image in the future, the Traveler program also records a caption for each image. This caption is simply a modified version of the Synlib FocusPath [1] used to reach this screen in the application. In the case of a screen displaying the Print Help window of the DT_PAD application, the caption is "DT_PAD->File->Print->Help". Since it is generated from the FocusPath, this caption is the same for all locale versions of this image.
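The heart of the Traveler, including the file naming and the caption, can be sketched in a few lines of C. The calls that actually drive the application, capture the window image and report the FocusPath are Synlib-based and specific to our test code, so they appear below only as stand-in stubs; the stubs, the MAX_SCREENS limit and the printed caption are illustrative assumptions rather than Traveler's actual implementation:

    /* Sketch of the Traveler's per-locale loop.  The three functions marked
     * "stub" stand in for the Synlib-based code described in [1]; they are
     * placeholders for illustration, not actual Synlib entry points.        */
    #include <stdio.h>

    #define MAX_SCREENS 3   /* small limit so this stubbed sketch terminates */

    /* Stub: in the real Traveler, Synlib drives the application to the next
     * screen of interest; returns 0 when the tour is finished.              */
    static int drive_to_next_screen(void)
    {
        static int visited = 0;
        return ++visited <= MAX_SCREENS;
    }

    /* Stub: in the real Traveler, the current window is captured to a GIF file. */
    static void capture_window_image(const char *gif_name)
    {
        printf("captured %s\n", gif_name);
    }

    /* Stub: the FocusPath used to reach the current screen. */
    static const char *current_focus_path(void)
    {
        return "DT_PAD->File->Print->Help";
    }

    static void tour_locale(const char *app, const char *locale)
    {
        char gif_name[256];
        int  seq = 0;

        while (drive_to_next_screen()) {
            seq++;

            /* File name encodes application, locale and sequence number,
             * e.g. "DT_PAD.ja_JP.SJIS.08.gif".                            */
            sprintf(gif_name, "%s.%s.%02d.gif", app, locale, seq);
            capture_window_image(gif_name);

            /* The caption is the FocusPath; how Traveler stores it is not
             * shown here, so the sketch simply prints it.                  */
            printf("caption: %s\n", current_focus_path());
        }
    }

    int main(void)
    {
        tour_locale("DT_PAD", "ja_JP.SJIS");
        return 0;
    }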
The Webifier is a general-purpose script that creates and organizes web pages based on the file naming convention mentioned above.
Once Traveler has captured and saved screen images in all desired locales, the Webifier creates HTML pages that allow visitors to navigate through the captured images. The Webifier constructs an HTML page for each captured screen image and its associated caption. The Webifier provides links from each image to the images before and after it in its tour and to corresponding images in other locales.
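As a rough illustration, the fragment below writes one such image page following the file naming convention described earlier. The real Webifier is a script; the markup, the overview page name and the plain locale links (a pull-down menu on the real pages) are assumptions used only to show the page structure:

    /* Sketch of one image page as the Webifier might write it.  The markup,
     * the "<app>.overview.html" name and the plain locale links are
     * assumptions for illustration, not the actual output of our tool.     */
    #include <stdio.h>

    static void write_image_page(const char *app, const char *locale,
                                 int seq, int last_seq, const char *caption,
                                 const char *other_locales[], int n_other)
    {
        char page[256];
        FILE *fp;
        int  i;

        sprintf(page, "%s.%s.%02d.html", app, locale, seq);
        fp = fopen(page, "w");
        if (fp == NULL)
            return;

        fprintf(fp, "<html><head><title>%s image %d (%s)</title></head><body>\n",
                app, seq, locale);
        fprintf(fp, "<h3>%s</h3>\n", caption);
        fprintf(fp, "<img src=\"%s.%s.%02d.gif\">\n", app, locale, seq);

        /* Previous and next images in this locale's tour. */
        if (seq > 1)
            fprintf(fp, "<a href=\"%s.%s.%02d.html\">previous</a>\n",
                    app, locale, seq - 1);
        if (seq < last_seq)
            fprintf(fp, "<a href=\"%s.%s.%02d.html\">next</a>\n",
                    app, locale, seq + 1);

        /* Up to the thumbnail overview of the whole tour. */
        fprintf(fp, "<a href=\"%s.overview.html\">thumbnail overview</a>\n", app);

        /* The corresponding image in each of the other locales. */
        for (i = 0; i < n_other; i++)
            fprintf(fp, "<a href=\"%s.%s.%02d.html\">%s</a>\n",
                    app, other_locales[i], seq, other_locales[i]);

        fprintf(fp, "</body></html>\n");
        fclose(fp);
    }

    int main(void)
    {
        const char *others[] = { "C" };
        write_image_page("DT_PAD", "ja_JP.SJIS", 8, 12,
                         "DT_PAD->File->Print->Help", others, 1);
        return 0;
    }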
After all the image pages for a tour have been constructed, the Webifier creates an overview page showing thumbnail images of all screen captures in all locales. Each thumbnail image is a link to the full-size image page.
From any image page, one can
move through the sequence of images in a locale,
look at the same screen in different locales, or
go up to a thumbnail overview of all the images in a tour.
Figure 3 shows the web page generated for screen capture number 8 in the C locale tour of the DT_PAD application.
Clicking on the up arrow takes you to the previous image in the C locale tour.
Clicking on the down arrow takes you to the next image in the C locale tour.
Clicking on the button between the up and down arrows takes you to the Thumbnail Overview of the DT_PAD tour (see Figure 5).
Clicking on the pull-down menu at the upper right allows you to see the corresponding image in another locale, such as Japanese (see Figure 4).
Figure 3: Image #8 of DT_PAD tour in the default C locale
Figure 4 shows the web page generated for screen capture number 8 in the Japanese locale tour of the DT_PAD application.
Clicking on the up arrow takes you to the previous image in the Japanese locale tour.
Clicking on the down arrow takes you to the next image in the Japanese locale tour.
Clicking on the button between the up and down arrows takes you to the Thumbnail Overview of the DT_PAD tour (see Figure 5).
Clicking on the pull-down menu at the upper right allows you to see the corresponding image in another locale, such as the default C locale (see Figure 3).
Figure 4: Image #8 of DT_PAD tour in a Japanese locale
Figure 5 shows the DT_PAD Thumbnail Overview page. This page contains thumbnail images of each screen captured in the DT_PAD tour. The images are organized into columns by locale. Each row contains corresponding screen captures in the different locales. The caption for each screen capture appears alongside the thumbnails. Clicking on any image takes you to the full-size image page.
Figure 5: Thumbnail Overview of DT_PAD tour
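The overview page can be sketched along the same lines as the image pages. In the fragment below, the table markup, the ".small.gif" thumbnail naming and the sample captions are assumptions used only to illustrate the one-row-per-screen, one-column-per-locale layout:

    /* Sketch of the thumbnail overview page: one row per captured screen,
     * one column per locale, with the caption alongside each row.  The
     * table markup, the ".small.gif" thumbnail naming and the sample
     * captions are assumptions, not the actual output of our Webifier.    */
    #include <stdio.h>

    static void write_overview(const char *app,
                               const char *locales[], int n_locales,
                               const char *captions[], int n_screens)
    {
        char page[256];
        FILE *fp;
        int  row, col;

        sprintf(page, "%s.overview.html", app);
        fp = fopen(page, "w");
        if (fp == NULL)
            return;

        fprintf(fp, "<html><body><table>\n");
        for (row = 1; row <= n_screens; row++) {
            fprintf(fp, "<tr><td>%s</td>\n", captions[row - 1]);
            for (col = 0; col < n_locales; col++)
                /* Each thumbnail links to its full-size image page. */
                fprintf(fp, "<td><a href=\"%s.%s.%02d.html\">"
                            "<img src=\"%s.%s.%02d.small.gif\"></a></td>\n",
                        app, locales[col], row, app, locales[col], row);
            fprintf(fp, "</tr>\n");
        }
        fprintf(fp, "</table></body></html>\n");
        fclose(fp);
    }

    int main(void)
    {
        const char *locales[]  = { "C", "ja_JP.SJIS" };
        const char *captions[] = { "DT_PAD->File", "DT_PAD->File->Print" };
        write_overview("DT_PAD", locales, 2, captions, 2);
        return 0;
    }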
Different groups use the tours according to their needs. Testers are typically the first users. After the Traveler and Webifier have run, the tester can browse the thumbnail overview page to see whether any images are missing. Missing images could indicate a problem running the Traveler program. By comparing the thumbnail images of a particular screen across several locales, the tester can also quickly see any gross differences between the locales.
If the thumbnail overview seems acceptable, the tester can then go through the tours of the full-size images in the various locales. Although the tester might not be familiar with the languages, it is useful to compare foreign locale snapshots with snapshots in the default C locale.
After the tester has done a preliminary verification of the snapshots, the language experts and localizers from around the world can tour the web pages from the comfort of their own offices.
When problems are detected in the localized application, it is simple to point out the offending image, e.g. as "DT_PAD image 12 in the ja_JP.SJIS locale". Developers and testers responding to the problem do not need to first re-create the scenario to understand the problem; they can simply point a browser at the page in question.
When bugs are found and fixed, the Traveler program is run again and the Webifier constructs a new tour. Because the Traveler and Webifier programs are automated, generating new tours involves very little overhead.
One surprising use of the Traveler Tour came after several localizers first saw a tour. As noted earlier, translation of the message catalogs is difficult when the localizers have never seen the application run; they do not have a context for the strings to be translated. The localizers pointed out that having a Traveler Tour of the C locale available before they start their own translations would make the translation easier.
The Traveler Tours improved our localization efforts immediately. Before localized versions of the applications were available outside our lab, our localizers were able to point out localization problems such as missing translations, corrupted text and incorrect sizing of windows.
Each Traveler Tour can contain dozens of screen images in several locales. These GIF images average 15 kilobytes in size. Storing and distributing those images can be a burden on storage space and network bandwidth.
Corporate firewalls can make it difficult to share web-based information with people outside the company. In some cases, we transmitted the web pages to localizers to view on their own machines.
The Traveler Tours are not a substitute for actual hands-on testing. For instance, if a locale is experiencing a performance problem, it would not be detected by someone looking at the web page images.
Creating high-quality localized software poses unique challenges: rarely can a single person verify that the application works correctly in all supported locales. And the people who have the necessary software and language expertise are often spread halfway around the world, making collaboration difficult.
The Traveler Tours allow us to solve some of these difficulties through an integrated use of automated testing and the World Wide Web. Although originally conceived solely as a testing tool, the Traveler Tours have been helpful to
localizers who want to become familiar with an application before they begin translating;
developers who want to see how the application behaves in various locales;
overseas partners who want to preview the localizations; and
managers who want to check on the status of the localization effort.
By bridging the gaps between the different groups involved in localization, the Traveler Tours have greatly simplified and improved our development of localized software.
The authors would like to thank Ken Bronstein, Arne Thormodsen and Dan Williams of Hewlett Packard for their help in developing the Traveler Tour ideas.
[1] Sankar L. Chakrabarti: Testing X-Clients Using Synlib and FocusMaps. The X Resource, Issue 13, pp. 7-21, Winter Issue, 1995
[2] Harry Robinson and Sankar L. Chakrabarti: Testing CDE In Sixty Languages: One Test Is All It Takes. Proceedings of the 14th International Conference on Testing Computer Software, 1997
Harry Robinson is currently a software test engineer with Microsoft's Intelligent Search Group.
Sankar Chakrabarti is an R&D engineer at Hewlett Packard.