I need to open an existing file and write to it. The file may or may not be Unicode. Is there a way to detect the encoding so that I do not alter it when I write to it? For info, I am using _wfopen_s() to open the file.

Only append mode seems to retain the existing encoding, but I need to replace the contents of the file.

I guess I could open the file for binary and check the BOM but is there an easier/better way?

Thanks.

What I have tried:

Using append mode, but I need to overwrite the file's contents.
Posted
Updated 19-Jan-17 6:40am

UTF-16 encoded files should always start with a Byte order mark - Wikipedia[^]. For UTF-8 files the BOM is optional.

So you should check first for a BOM. If there is none, you might check for valid UTF-8.
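Checking for a BOM only needs the first few bytes of the file. A minimal sketch (the `Bom` enum and `detect_bom` name are made up for illustration; a real implementation might also check for the UTF-32 BOMs):

```cpp
#include <cstdio>

// Possible encodings signalled by a BOM (illustrative enum).
enum class Bom { None, Utf8, Utf16LE, Utf16BE };

// Read the first bytes of a file and classify any BOM found.
Bom detect_bom(const char* path)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return Bom::None;
    unsigned char b[3] = {0, 0, 0};
    std::size_t n = std::fread(b, 1, 3, f);
    std::fclose(f);
    if (n >= 3 && b[0] == 0xEF && b[1] == 0xBB && b[2] == 0xBF) return Bom::Utf8;
    if (n >= 2 && b[0] == 0xFF && b[1] == 0xFE) return Bom::Utf16LE;
    if (n >= 2 && b[0] == 0xFE && b[1] == 0xFF) return Bom::Utf16BE;
    return Bom::None;
}
```

If this returns `Bom::None`, the file is either ANSI or BOM-less UTF-8, and a content check (valid UTF-8?) is the next step.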

On Windows you can use the MultiByteToWideChar function (Windows)[^] to do that (it probably has to be called anyway, to convert UTF-8 text to the UTF-16 that Windows uses internally).

Another option is using the ICU converter library (Using Converters - ICU User Guide[^]).

There are also some projects providing converters and check functions like UTF8-CPP: UTF-8 with C++ in a Portable Way[^].

Or write your own according to the allowed code points. I once found a sample implementation based on the Unicode recommendations, but I cannot find it anymore.
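A hand-written check does not have to be long. This is a minimal sketch (not the sample implementation mentioned above) that validates the lead/continuation byte patterns and rejects overlong forms, UTF-16 surrogates, and code points above U+10FFFF:

```cpp
#include <cstddef>

// Minimal UTF-8 validity check over a raw byte buffer.
bool is_valid_utf8(const unsigned char* s, std::size_t len)
{
    std::size_t i = 0;
    while (i < len) {
        unsigned char c = s[i];
        std::size_t extra;          // number of continuation bytes
        unsigned long cp;           // decoded code point
        if (c < 0x80)                { i += 1; continue; }   // ASCII
        else if ((c & 0xE0) == 0xC0) { extra = 1; cp = c & 0x1F; }
        else if ((c & 0xF0) == 0xE0) { extra = 2; cp = c & 0x0F; }
        else if ((c & 0xF8) == 0xF0) { extra = 3; cp = c & 0x07; }
        else return false;                       // invalid lead byte
        if (i + extra >= len) return false;      // truncated sequence
        for (std::size_t j = 1; j <= extra; ++j) {
            if ((s[i + j] & 0xC0) != 0x80) return false;  // bad continuation
            cp = (cp << 6) | (s[i + j] & 0x3F);
        }
        // Reject overlong encodings, surrogates and out-of-range values.
        if ((extra == 1 && cp < 0x80)   ||
            (extra == 2 && cp < 0x800)  ||
            (extra == 3 && cp < 0x10000) ||
            (cp >= 0xD800 && cp <= 0xDFFF) || cp > 0x10FFFF)
            return false;
        i += extra + 1;
    }
    return true;
}
```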

Note that all of these checks will report a plain ASCII file as valid UTF-8. So it may be necessary to check first whether the file contains any characters >= 0x80 at all.
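That pre-check is a one-liner. A sketch (a buffer with only bytes below 0x80 is valid in both an ANSI code page and UTF-8, so it can be written back unchanged):

```cpp
#include <cstddef>

// Returns true if every byte is below 0x80, i.e. the buffer is plain ASCII.
bool is_plain_ascii(const unsigned char* s, std::size_t len)
{
    for (std::size_t i = 0; i < len; ++i)
        if (s[i] >= 0x80) return false;
    return true;
}
```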
See Handling simple text files in C/C++[^]. It shows how you can identify the encoding from the first few bytes of the file.

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


