
What Every Developer Should Know About Character Encoding


Introduction

If you write code that touches a text file, you probably need this.

Let's start off with two key items:

  1. Unicode does not solve this issue for us (yet).
  2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.

Character Encoding

And let's add a codicil to this – most Americans can get by without having to take this into account, most of the time. Because the first 128 byte values in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs), and because American English rarely strays beyond A-Z with no accents or other marks, we're good to go. But the second you carry those same assumptions into an HTML or XML file that has characters outside that first range, the trouble starts.

The computer industry started out with disk space and memory at a premium. Anyone who suggested using two bytes for each character instead of one would have been laughed at. In fact, we're lucky the byte settled at 8 bits, or we might have had fewer than 256 values for each character. There were, of course, numerous character sets (or codepages) developed early on. But we ended up with almost everyone using a standard family of codepages where the first 128 values were identical across all of them and the upper 128 were unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.

And then for Asia, because 256 characters were not enough, some of the range 128 – 255 was used for what were called DBCS (double byte character sets). For each lead byte in those higher ranges, the second byte identified one of 256 characters, giving up to 128 × 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.

And for a while, this worked well. Operating systems, applications, etc. were mostly set to use a specified codepage. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, each entering data according to their own country's conventions – that broke the paradigm.

Fast forward to today. The two file formats where this is easiest to explain, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally declare its character encoding in its header metadata. If it's not set, most programs assume UTF-8, but that assumption is not universally followed. If the encoding is not specified and the program reading the file guesses wrong, the file will be misread.

Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters outside the range 0 – 127.
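
For example, here is a minimal C# sketch (the file name and content are made up) of writing an XML file so the declaration is always present and always matches the encoder actually used:

using System.IO;
using System.Text;

class WriteXmlWithDeclaration
{
    static void Main()
    {
        // The declared encoding and the encoding handed to the writer are the
        // same, so the declaration can never lie about the bytes that follow.
        var utf8 = new UTF8Encoding(false);   // UTF-8 without a byte-order mark

        using (var writer = new StreamWriter("books.xml", false, utf8))
        {
            writer.WriteLine("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
            writer.WriteLine("<books><book title=\"Straße nach Köln\" /></books>");
        }
    }
}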

Now let's look at UTF-8, because as the de facto standard, and because of the way it works, it gets people into a lot of trouble. UTF-8 became popular for two reasons. First, it matches the standard codepages for the first 128 characters, so most existing HTML and XML already conformed to it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still on dial-up modems.

UTF-8 borrowed from the DBCS design of the Asian codepages. The first 128 byte values are single-byte representations of characters. Then, for the next most common set of characters, a block of lead bytes in the upper 128 starts a two-byte sequence, giving us more characters. But wait, there's more. For less common characters, a lead byte introduces continuation bytes, so that three bytes together define the character; the scheme was originally defined to go up to six-byte sequences (it is now capped at four). Using this MBCS (multi-byte character set), you can write the equivalent of every Unicode character, and unless what you are writing is a list of seldom-used Chinese characters, do it in fewer bytes.
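
A quick way to see those variable-length sequences is to ask a UTF-8 encoder for the raw bytes of a few characters; a minimal C# sketch (the sample characters are arbitrary):

using System;
using System.Text;

class Utf8Lengths
{
    static void Main()
    {
        // One character each from the 1-, 2-, and 3-byte ranges.
        foreach (string ch in new[] { "A", "ß", "中" })
        {
            byte[] bytes = Encoding.UTF8.GetBytes(ch);
            Console.WriteLine("{0} -> {1}", ch, BitConverter.ToString(bytes));
        }
        // A -> 41
        // ß -> C3-9F
        // 中 -> E4-B8-AD
    }
}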

But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then add a character, such as ß, that in their text editor, using the codepage for their region, is a single byte, and they save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the declared encoding and that byte is now the first byte of a two-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte, an error.
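
You can reproduce that failure in a couple of lines; a minimal C# sketch (on .NET Core / .NET 5+ the Windows-1252 codepage additionally needs the System.Text.Encoding.CodePages provider registered):

using System;
using System.Text;

class EditorMismatch
{
    static void Main()
    {
        // An editor using the Windows-1252 codepage saves ß as the single byte 0xDF.
        byte[] savedByEditor = Encoding.GetEncoding(1252).GetBytes("ß");   // { 0xDF }

        // A program reading the file as UTF-8 sees 0xDF as the lead byte of a
        // two-byte sequence with no valid continuation byte, so the decoder
        // substitutes the replacement character instead of ß.
        string readAsUtf8 = Encoding.UTF8.GetString(savedByEditor);
        Console.WriteLine(readAsUtf8);   // prints "�", not "ß"
    }
}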

Point 2 – Always create HTML and XML in a program that writes it out correctly using an encoder. If you must create it with a text editor, then view the final file in a browser.

Now, what about when the code you are writing will read or write a file? We are not talking about binary/data files, where you write it out in your own format, but about files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local codepage. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.

Here's a key point about these text files – every program is still using an encoding. It may not be setting it in code, but by definition an encoding is being used.

Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
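
In .NET that's just a matter of handing the encoding to the reader and the writer instead of trusting the default; a minimal sketch (the file name is hypothetical):

using System.IO;
using System.Text;

class ExplicitTextIO
{
    static void Main()
    {
        // Name the encoding once and use it on both sides of the round trip.
        var utf8 = new UTF8Encoding(false);

        using (var writer = new StreamWriter("notes.txt", false, utf8))
            writer.WriteLine("grüße");

        string text;
        using (var reader = new StreamReader("notes.txt", utf8))
            text = reader.ReadToEnd();
    }
}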

Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded as UTF-8. But if you write it using an XML encoder, it will include the encoding in the metadata and you can't get it wrong (it also adds the byte-order preamble to the file).
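
In .NET, for instance, System.Xml.XmlWriter plays that role; a minimal sketch (the file and element names are made up):

using System.Text;
using System.Xml;

class XmlEncoderSketch
{
    static void Main()
    {
        var settings = new XmlWriterSettings
        {
            Encoding = Encoding.UTF8,   // goes into the declaration and drives the actual bytes
            Indent = true
        };

        using (var writer = XmlWriter.Create("report.xml", settings))
        {
            writer.WriteStartElement("report");
            writer.WriteElementString("city", "München");
            writer.WriteEndElement();
        }
        // The file begins with <?xml version="1.0" encoding="utf-8"?>, and
        // because Encoding.UTF8 carries a preamble, the BOM is written too.
    }
}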

OK, you're reading and writing files correctly, but what about inside your code? There it's easy – Unicode. That's what those encoders in the Java and .NET runtimes are designed to produce and consume. You read in bytes and get Unicode. You write Unicode and get an encoded file. That's why the char type is 16 bits and is a distinct core type reserved for characters. This you probably have right, because today's languages don't give you much choice in the matter.

Point 5 – (For developers in languages that have been around a while) – Always use Unicode internally. In C++, this means wide chars (or something similar). Don't get clever to save a couple of bytes; memory is cheap and you have more important things to do.
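
In .NET the boundary looks like this; a small sketch showing that bytes only appear when an encoder is explicitly involved:

using System;
using System.Text;

class UnicodeInside
{
    static void Main()
    {
        // Inside the program, a string is just Unicode (UTF-16 code units in .NET);
        // no encoding is involved yet.
        string s = "Straße";

        // Bytes exist only at the boundary, and only via an explicit encoder.
        byte[] onDisk = Encoding.UTF8.GetBytes(s);       // 7 bytes: ß takes two
        string back = Encoding.UTF8.GetString(onDisk);   // the same 6-character string
        Console.WriteLine(back == s);                    // True
    }
}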

Wrapping It Up

I think there are two key items to keep in mind here. First, make sure you are taking the encoding into account for text files. Second, this is actually all very easy and straightforward. People rarely screw up how to use an encoding; it's when they ignore the issue that they get into trouble.

History

  • 23rd December, 2010: Initial post

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
CTO/founder, Windward Studios
United States

Comments and Discussions

 
Question: Traditional Chinese characters aren't being read from network stream
Balaji1982, 21-Mar-12 20:02
Hi Guys,

I am facing this unique situation in my Application. I am having a Windows Service installed and running 24/7 on my server. This server basically listens to a server port offsite. The data received will be a mix of English characters and Traditional Chinese. I write the data received from the Offsite Server to a db table and file.

The issue is that the traditional Chinese characters aren’t being read as they have to be read. English characters are read perfectly.

I am using the following code

TcpClient clientSocket;   // connected client socket
NetworkStream networkStream = clientSocket.GetStream();

byte[] bytes = new byte[clientSocket.ReceiveBufferSize + 1];
int bytesRead = networkStream.Read(bytes, 0, clientSocket.ReceiveBufferSize);

// Decode only the bytes actually read, using Windows-1252.
string clientdata = Encoding.GetEncoding(1252).GetString(bytes, 0, bytesRead);

This clientdata contains the received string sent from the Offsite Server. This data is not as I expected for Chinese characters.

I have tried using both Big5 and 1252 encoding.

Any help would be good.

Thanks
Balaji V