
Fun with character encodings, Greek ANSI

Now that I got the Java encodings halfway under control, I encountered “Fun with character encodings” again. This time, it’s a Greek tragedy.

A couple of days ago I received a small text file with English strings. The strings are messages for a service pack, and they needed to be translated into Greek. Unfortunately, that wasn’t all: the text file needed to be in ANSI format, because the installer InnoSetup requires that format. Hmm, I immediately thought that smelled like trouble, because most languages with a different codepage really need to be encoded in Unicode; ANSI simply doesn’t have enough characters. But first, let’s get it translated.

I got the translation back as a Word file, and while I could probably have just asked the translator to send it as Greek ANSI, I thought I’d give it a shot myself. The first naive try: open the file in Notepad++ and select “Convert to ANSI”. Of course, I get:

greek.INVALID_VERSION_MESSAGE=??t? t? pa??t? e??µ???s?? µp??e? ?a e??µe??se? µ??? t?? ??d?s? %1 ?a? ? d???? sa? e??a? %2.

So I google to see if it is at all possible, and yes, it seems you can encode Greek text in ANSI, but unlike English, which uses codepage 1252, Greek has to use codepage 1253. Well, that doesn’t seem too hard, so I try again. Still the same. OK, maybe a different text editor? Nope, doesn’t work either. So now I send the UTF-8 encoded text file to the Greek translator and ask him if he can convert it into ANSI.
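
As an aside, once you know the right codepage, the conversion itself is trivial in code. Here is a minimal Java sketch (the sample word is mine, not from the actual file) that round-trips Greek text through windows-1253 and then shows what forcing it into windows-1252 does:

    import java.nio.charset.Charset;

    public class GreekAnsi {
        public static void main(String[] args) {
            Charset cp1253 = Charset.forName("windows-1253"); // the Greek "ANSI" codepage
            String greek = "έκδοση"; // sample word ("version"), not from the real file

            byte[] ansi = greek.getBytes(cp1253);          // one byte per character
            System.out.println(new String(ansi, cp1253)); // έκδοση, survives intact

            // Forcing the same text into codepage 1252 loses the characters,
            // which is roughly the kind of garbage shown above.
            Charset cp1252 = Charset.forName("windows-1252");
            System.out.println(new String(greek.getBytes(cp1252), cp1252));
        }
    }

In other words, the Greek characters fit into codepage 1253 just fine; as it turns out below, the trouble in this story is viewing the result, not producing it.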

While I wait, I do a little more research and stumble over a little Microsoft tool named AppLocale. At first I misunderstood its purpose; I thought it was just for switching the system locale, something you can easily do through the Control Panel. But after a little more reading, I realized that this might be my solution: I can use AppLocale to open another application, and AppLocale will pretend it is a localized Windows environment. In my case, I needed to look at my Greek ANSI file on a Greek system, which I don’t have. Instead, I use AppLocale to open my text editor, and with this instance of the text editor, I open my Greek file. Lo and behold, all characters come out correctly.

greek.INVALID_VERSION_MESSAGE=Αυτό το πακέτο ενημέρωσης μπορεί να ενημερώσει μόνο την έκδοση %1 και η δικιά σας είναι %2.

My file was correct all along, I just couldn’t verify it on my system. I’ll make sure to keep this little application around, because I have run into this in the past and usually just ended up submitting a Unicode file and letting the developers deal with it. By this time my translator had also sent the file back, and sure enough, his looked just the same.

Translator 1, Greek ANSI File 0

Fun with character encodings

What do ASCII, ANSI, Latin-1, Windows-1252, Unicode and UTF have in common?

They are a pain in the neck for translators. But they are also ways to encode characters in files, even in plain text files that seem as “un-encoded” as possible. Most of the time, it isn’t a problem: you open a txt file without really knowing (or needing to know) what character encoding it has. The only reason most people even know about this is the “bush hid the facts” trick in Notepad (see below). I am not going into the history and details of the various formats; at the bottom are some links to other pages that cover that, if you want to learn more. I am merely looking at the consequences they can have for me during translation.

What I care more about is the fact that it can really break your neck during the translation of string files. I run into it on and off, and every time it happens, I learn a little bit more. I have wanted to write about it for quite a while, and since the whole thing came up again earlier this week, I think now is the time.

We have a little update tool for an application that is written in Java. Java programs usually keep their strings in .properties files. Those files have to be encoded in the 8-bit characters of ISO 8859-1 (aka Latin-1), and anything that encoding cannot safely carry has to be converted into Unicode escape characters, sometimes referred to as Java escape characters; in practice, the conversion tools escape everything that isn’t plain ASCII, including characters like ü Ü é or ñ. I think most of us have encountered other escape characters, for example \n for a new line or \t for a tab. Unicode escape characters are a little more involved, using a \uHHHH notation, where HHHH is the hexadecimal code point of the character in the Unicode character set. So, for example, the ß in a Java properties file has to be encoded as \u00df. To convert those characters, I use Rainbow, which is part of the Okapi Framework. It has a handy Encoding Conversion Utility that allows you to convert files from one encoding to another.
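
To make the \uHHHH notation concrete, here is a little Java sketch of what such a conversion does. This is just the idea, not Rainbow’s actual implementation:

    public class UnicodeEscaper {
        // Escape everything outside plain ASCII as \uHHHH, the way
        // properties-file converters typically do.
        static String escape(String s) {
            StringBuilder sb = new StringBuilder();
            for (char c : s.toCharArray()) {
                if (c < 128) {
                    sb.append(c);                                 // ASCII passes through
                } else {
                    sb.append(String.format("\\u%04x", (int) c)); // everything else escaped
                }
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            System.out.println(escape("Größe")); // prints Gr\u00f6\u00dfe
        }
    }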

Sounds really easy, right? Right? Now what is this woman complaining about again? Well, it’s not that easy. The conversion tool is designed to work with 8-bit, ASCII-based encodings. So what IS the problem then, didn’t I just say that Java properties files use an ASCII-based encoding? Well, TagEditor takes the ASCII file, and when you “Save as Target” after translation, it converts the file to UTF-8. And that is still not the problem; the problem is that it writes the UTF-8 without a BOM (Byte Order Mark). The BOM is an (invisible) byte sequence at the very beginning of a file (for UTF-8 it is the three bytes EF BB BF) which basically tells a program “this is a Unicode file”. Without the BOM, some programs do not recognize the encoding of the file and assume ASCII, and that is the problem with Rainbow (and also with Passolo, a program that was just bought by SDL).

If you try to convert the encoding of a BOMless Unicode file, it goes terribly wrong. As I mentioned, the correct conversion of ß gives you \u00df. Converting a BOMless file “double escapes” the extended characters, and you get \u00c3\u0178 instead, which is clearly not the same. The double escape is actually a good indicator that something went wrong: if you check your file and see that your extended characters are represented by two escape sequences each, you know the conversion failed. Of course, that can be difficult with languages like Greek, Russian or the Asian languages, simply because every single character is escaped. I usually pick a short string and count.
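
You can reproduce the double escape in a few lines of Java. This sketch misreads the UTF-8 bytes of ß as windows-1252, the way a BOM-blind tool would, and then escapes each resulting character:

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class DoubleEscapeDemo {
        public static void main(String[] args) {
            byte[] utf8 = "ß".getBytes(StandardCharsets.UTF_8); // two bytes: C3 9F
            // A BOM-blind tool decodes those bytes as windows-1252,
            // yielding two characters: Ã (00C3) and Ÿ (0178)...
            String misread = new String(utf8, Charset.forName("windows-1252"));
            // ...and then dutifully escapes each character separately.
            StringBuilder sb = new StringBuilder();
            for (char c : misread.toCharArray()) {
                sb.append(String.format("\\u%04x", (int) c));
            }
            System.out.println(sb); // \u00c3\u0178 instead of \u00df
        }
    }

Two escape sequences for one character: exactly the tell-tale sign described above.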

Now, how do you know how a file is encoded? Right now, I use Notepad++ to check. It has a handy little Format menu that shows which encoding is used and also lets you convert from one encoding to another; it covers line endings (Windows, UNIX, Mac) as well as encodings (ANSI, UTF-8 without BOM, UTF-8, and UCS-2 Big and Little Endian). Surprisingly, Windows Notepad is one of the few programs that actually manages to decipher the Unicode encoding even without a BOM: just open the BOMless file in Windows Notepad and save it again without changes, and the BOM gets written back. Unfortunately, you usually just don’t know how a file is encoded, and most of the time it isn’t even an issue.
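
If you would rather check without a text editor, a few lines of code will do it: look for the three BOM bytes, and if they are missing, test whether the bytes are at least valid UTF-8. A minimal Java sketch (the file name is made up):

    import java.nio.ByteBuffer;
    import java.nio.charset.CharacterCodingException;
    import java.nio.charset.CodingErrorAction;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class EncodingGuess {
        public static void main(String[] args) throws Exception {
            byte[] data = Files.readAllBytes(Paths.get("strings.txt")); // example name
            // UTF-8 files may start with the BOM bytes EF BB BF.
            if (data.length >= 3 && (data[0] & 0xFF) == 0xEF
                    && (data[1] & 0xFF) == 0xBB && (data[2] & 0xFF) == 0xBF) {
                System.out.println("UTF-8 with BOM");
                return;
            }
            // No BOM: see if the bytes are at least valid UTF-8.
            try {
                StandardCharsets.UTF_8.newDecoder()
                        .onMalformedInput(CodingErrorAction.REPORT)
                        .onUnmappableCharacter(CodingErrorAction.REPORT)
                        .decode(ByteBuffer.wrap(data));
                System.out.println("valid UTF-8 (no BOM)");
            } catch (CharacterCodingException e) {
                System.out.println("not UTF-8, probably a legacy codepage");
            }
        }
    }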

I actually got to talk to Yves Savourel, who works at ENLASO and on the Okapi Framework (and on about a gazillion other things related to localization), and he has been very helpful. He explained a few things to me a little better.

    The issue:

  • a BOMless UTF-8 file is recognized as “windows-1252” encoding
  • a UTF-8 file uses two or more bytes to encode the extended characters
  • the application thinks each of those bytes is a separate character and converts each into a Unicode escape sequence
    The solution:

  • in Rainbow, manually force the encoding of the source file to UTF-8
  • in Rainbow, use the Add/Remove BOM utility to set the BOM properly (sketched below)
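
The BOM part of the fix is conceptually simple: prepend the three bytes EF BB BF if they are missing. Here is a sketch of what such an add-BOM step amounts to; this is not Rainbow’s actual code, and the file name is just an example:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class AddBom {
        public static void main(String[] args) throws IOException {
            // Assumes you have already established that the file is
            // BOMless UTF-8 (e.g. with the sketch further up).
            Path file = Paths.get("strings.properties"); // example name
            byte[] content = Files.readAllBytes(file);
            byte[] withBom = new byte[content.length + 3];
            withBom[0] = (byte) 0xEF; // the three UTF-8 BOM bytes
            withBom[1] = (byte) 0xBB;
            withBom[2] = (byte) 0xBF;
            System.arraycopy(content, 0, withBom, 3, content.length);
            Files.write(file, withBom); // rewrite the file with the BOM in front
        }
    }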

If you got through all this stuff, you may now wonder whether you will ever run into this issue. It is also not just about BOM or no BOM; file encodings raise issues in other applications too. To be honest, I don’t know how often freelance translators are confronted with these types of files, but here are the situations where I keep my eyes peeled:

  • Java files (.properties)
    This was the most recent issue that triggered this post.
  • String export files (often XML files or even plain txt)
    I tend to get the strings for REALBasic applications in XML files, though I believe they are created by RegexBuddy.
  • Non-Windows files or Windows files that will be used on other OSs
    We run into this issue with txt files that were created on a Mac and that will be used in InstallShield-type applications, for example to display the license agreement or a readme file.
  • All files
    Haha, very funny, I know. What I mean is: I have experienced various issues with files that I have to process through different applications in order to get CAT-translatable files, for example when we receive a weird string file that Trados doesn’t understand and we need to find a manageable way to extract the translatable text.

Anyway, maybe this will help someone else in a situation where the client comes back and claims the files are corrupt or some such. Otherwise, I apologize for boring the heck out of you. You should have stopped reading my post a long time ago :-)

Some interesting links with related information:

Okapi Framework
Notepad++
Bush hid the facts hoax and Bush hid the facts on Wikipedia
Mojibake
How to Determine Text File Encoding
Cast of Characters: ASCII, ANSI, UTF-8 and all that