
C programming: How to program for Unicode?

What prerequisites are needed to do strict Unicode programming?

Does this imply that my code should not use char types anywhere and that functions need to be used that can deal with wint_t and wchar_t?

And what is the role played by multibyte character sequences in this scenario?


Jonathan Leffler

C99 or earlier

The C standard (C99) provides for wide characters and multi-byte characters, but since there is no guarantee about what those wide characters can hold, their value is somewhat limited. For a given implementation, they provide useful support, but if your code must be able to move between implementations, there is insufficient guarantee that they will be useful.

Consequently, the approach suggested by Hans van Eck (which is to write a wrapper around the ICU - International Components for Unicode - library) is sound, IMO.

The UTF-8 encoding has many merits, one of which is that if you do not mess with the data (by truncating it, for example), then it can be copied by functions that are not fully aware of the intricacies of UTF-8 encoding. This is categorically not the case with wchar_t.

Unicode in full is a 21-bit format. That is, Unicode reserves code points from U+0000 to U+10FFFF.

One of the useful things about the UTF-8, UTF-16 and UTF-32 formats (where UTF stands for Unicode Transformation Format - see Unicode) is that you can convert between the three representations without loss of information. Each can represent anything the others can represent. Both UTF-8 and UTF-16 are multi-byte formats.

UTF-8 is well known to be a multi-byte format, with a careful structure that makes it possible to find the start of characters in a string reliably, starting at any point in the string. Single-byte characters have the high bit set to zero. Multi-byte characters have the first byte starting with one of the bit patterns 110, 1110 or 11110 (for 2-byte, 3-byte or 4-byte characters), with subsequent bytes always starting with 10. The continuation bytes are always in the range 0x80 .. 0xBF. There are rules requiring that UTF-8 characters be represented using the minimum possible number of bytes. One consequence of these rules is that the bytes 0xC0 and 0xC1 (and also 0xF5 .. 0xFF) cannot appear in valid UTF-8 data.

 U+0000 ..   U+007F  1 byte   0xxx xxxx
 U+0080 ..   U+07FF  2 bytes  110x xxxx   10xx xxxx
 U+0800 ..   U+FFFF  3 bytes  1110 xxxx   10xx xxxx   10xx xxxx
U+10000 .. U+10FFFF  4 bytes  1111 0xxx   10xx xxxx   10xx xxxx   10xx xxxx
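As an illustration of those rules, counting code points in a valid UTF-8 string only needs the lead-byte structure above. This is a minimal sketch (not from the original answer; the function name is made up), assuming the input is well-formed UTF-8:

#include <stddef.h>

/* Count code points in a NUL-terminated, assumed-valid UTF-8 string by
   skipping continuation bytes (those matching the bit pattern 10xx xxxx). */
size_t utf8_codepoint_count(const char *s)
{
    size_t count = 0;
    for (; *s != '\0'; s++) {
        if (((unsigned char)*s & 0xC0) != 0x80)   /* not a continuation byte */
            count++;
    }
    return count;
}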

Originally, it was hoped that Unicode would be a 16-bit code set and everything would fit into a 16-bit code space. Unfortunately, the real world is more complex, and it had to be expanded to the current 21-bit encoding.

UTF-16 thus is a single unit (16-bit word) code set for the 'Basic Multilingual Plane', meaning the characters with Unicode code points U+0000 .. U+FFFF, but uses two units (32-bits) for characters outside this range. Thus, code that works with the UTF-16 encoding must be able to handle variable width encodings, just like UTF-8 must. The codes for the double-unit characters are called surrogates.

Surrogates are code points from two special ranges of Unicode values, reserved for use as the leading, and trailing values of paired code units in UTF-16. Leading, also called high, surrogates are from U+D800 to U+DBFF, and trailing, or low, surrogates are from U+DC00 to U+DFFF. They are called surrogates, since they do not represent characters directly, but only as a pair.
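The arithmetic behind surrogate pairs is short. Here is a minimal sketch (the function names are invented) of splitting a code point above U+FFFF into a pair and recombining it:

#include <stdint.h>

/* Split a code point in the range U+10000 .. U+10FFFF into a UTF-16 surrogate pair. */
void utf16_encode_surrogates(uint32_t cp, uint16_t *high, uint16_t *low)
{
    cp -= 0x10000;                                /* leaves a 20-bit value */
    *high = (uint16_t)(0xD800 + (cp >> 10));      /* leading (high) surrogate */
    *low  = (uint16_t)(0xDC00 + (cp & 0x3FF));    /* trailing (low) surrogate */
}

/* Recombine a surrogate pair into the original code point. */
uint32_t utf16_decode_surrogates(uint16_t high, uint16_t low)
{
    return 0x10000 + (((uint32_t)(high - 0xD800) << 10) | (uint32_t)(low - 0xDC00));
}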

UTF-32, of course, can encode any Unicode code point in a single unit of storage. It is efficient for computation but not for storage.

You can find a lot more information at the ICU and Unicode web sites.

C11 and <uchar.h>

The C11 standard changed the rules, but not all implementations have caught up with the changes even now (mid-2017). The C11 standard summarizes the changes for Unicode support as:

Unicode characters and strings (<uchar.h>) (originally specified in ISO/IEC TR 19769:2004)

What follows is a bare minimal outline of the functionality. The specification includes:

6.4.3 Universal character names

Syntax

    universal-character-name:
        \u hex-quad
        \U hex-quad hex-quad
    hex-quad:
        hexadecimal-digit hexadecimal-digit hexadecimal-digit hexadecimal-digit

7.28 Unicode utilities <uchar.h>

The header <uchar.h> declares types and functions for manipulating Unicode characters.

The types declared are mbstate_t (described in 7.29.1) and size_t (described in 7.19); char16_t, which is an unsigned integer type used for 16-bit characters and is the same type as uint_least16_t (described in 7.20.1.2); and char32_t, which is an unsigned integer type used for 32-bit characters and is the same type as uint_least32_t (also described in 7.20.1.2).

(Translating the cross-references: <stddef.h> defines size_t, <wchar.h> defines mbstate_t, and <stdint.h> defines uint_least16_t and uint_least32_t.) The <uchar.h> header also defines a minimal set of (restartable) conversion functions:

mbrtoc16()
c16rtomb()
mbrtoc32()
c32rtomb()
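A minimal sketch of how the restartable conversion functions are used, assuming the program runs in a UTF-8 locale (this example is not from the original answer):

#include <locale.h>
#include <stdio.h>
#include <string.h>
#include <uchar.h>

int main(void)
{
    setlocale(LC_ALL, "");              /* pick up the environment's (e.g. UTF-8) locale */

    const char *s = "\xC3\xA9";         /* "é" encoded as UTF-8 */
    char32_t c32;
    mbstate_t state = {0};

    size_t rc = mbrtoc32(&c32, s, strlen(s), &state);
    if (rc > 0 && rc <= strlen(s))
        printf("decoded U+%04lX from %zu bytes\n", (unsigned long)c32, rc);
    return 0;
}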

There are rules about which Unicode characters can be used in identifiers using the \unnnn or \U00nnnnnn notations. You may have to explicitly enable support for such characters in identifiers. For example, GCC requires -fextended-identifiers to allow them in identifiers.
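A tiny sketch (the identifier and file name are arbitrary):

/* Compile with e.g.: gcc -std=c99 -fextended-identifiers -c ucn_ident.c */
int caf\u00E9 = 1;   /* identifier "café" written with a universal character name */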

Note that macOS Sierra (10.12.5), to name but one platform, does not support <uchar.h>.


I think you are selling wchar_t and friends a bit short here. These types are essential in order to allow the C library to handle text in any encoding (including non-Unicode encodings). Without the wide character types and functions, the C library would require a set of text-handling functions for every supported encoding: imagine having koi8len, koi8tok, koi8printf just for KOI-8 encoded text, and utf8len, utf8tok, utf8printf for UTF-8 text. Instead, we are lucky to have just one set of these functions (not counting the original ASCII ones): wcslen, wcstok, and wprintf.
All a programmer needs to do is use the C library character conversion functions (mbstowcs and friends) to convert any supported encoding to wchar_t. Once in wchar_t format, the programmer can use the single set of wide text handling functions the C library provides. A good C library implementation will support virtually any encoding most programmers will ever need (on one of my systems, I have access to 221 unique encodings).
As far as whether they will be wide enough to be useful: the standard requires that wchar_t be wide enough to hold any character in the implementation's supported character sets. This means (with possibly one notable exception) most implementations ensure it is wide enough that a program using wchar_t will handle any encoding supported by the system (Microsoft's wchar_t is only 16 bits wide, which means their implementation does not fully support all encodings, most notably the various UTF encodings, but theirs is the exception, not the rule).
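A minimal sketch of that workflow (assumes a UTF-8 locale is available; not from the original comment):

#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <wchar.h>

int main(void)
{
    setlocale(LC_ALL, "");                 /* select the user's encoding */

    const char *mb = "h\xC3\xA9llo";       /* multibyte (UTF-8) text: "héllo" */
    wchar_t wide[64];

    size_t n = mbstowcs(wide, mb, sizeof wide / sizeof wide[0]);
    if (n != (size_t)-1)
        wprintf(L"%ls is %zu wide characters long\n", wide, wcslen(wide));
    return 0;
}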
Hans van Eck

Note that this is not about "strict unicode programming" per se, but some practical experience.

What we did at my company was to create a wrapper library around IBM's ICU library. The wrapper library has a UTF-8 interface and converts to UTF-16 when it is necessary to call ICU. In our case, we did not worry too much about performance hits. When performance was an issue, we also supplied UTF-16 interfaces (using our own datatype).

Applications could remain largely as-is (using char), although in some cases they need to be aware of certain issues. For instance, instead of strncpy() we use a wrapper which avoids cutting off UTF-8 sequences. In our case, this is sufficient, but one could also consider checks for combining characters. We also have wrappers for counting the number of codepoints, the number of graphemes, etc.
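The wrapper itself isn't shown here, but a minimal sketch of the strncpy-safe idea might look like this (the function name is invented, and the input is assumed to be valid UTF-8):

#include <stddef.h>

/* Copy at most dstsize - 1 bytes of UTF-8 from src to dst without ever splitting
   a multi-byte sequence, and always NUL-terminate.  Assumes src is valid UTF-8. */
void utf8_strncpy(char *dst, const char *src, size_t dstsize)
{
    size_t i = 0;
    if (dstsize == 0)
        return;
    while (src[i] != '\0') {
        unsigned char lead = (unsigned char)src[i];
        size_t len = (lead < 0x80) ? 1 : (lead < 0xE0) ? 2 : (lead < 0xF0) ? 3 : 4;
        if (i + len >= dstsize)           /* next sequence would not fit whole: stop */
            break;
        for (size_t k = 0; k < len; k++)
            dst[i + k] = src[i + k];
        i += len;
    }
    dst[i] = '\0';
}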

When interfacing with other systems, we sometimes need to do custom character composition, so you may need some flexibility there (depending on your application).

We do not use wchar_t. Using ICU avoids unexpected issues in portability (but not other unexpected issues, of course :-).


A valid UTF-8 byte sequence would never be cut off (truncated) by strncpy. Valid UTF-8 sequences may not contain any 0x00 bytes (except for the terminating null byte, of course).
@Dan Moulding: if you strncpy(), say, a string containing a single Chinese character (which may be 3 bytes) into a 2-byte char array, you create an invalid UTF-8 sequence.
@Hans van Eck: If your wrapper copies that single 3-byte Chinese character into a 2-byte array, then you're either going to truncate it and create an invalid sequence, or you're going to have undefined behavior. Obviously, if you are copying data around, the target needs to be big enough; that goes without saying. My point was that strncpy, used properly, is perfectly safe to use with UTF-8.
@DanMoulding: If you know that your target buffer is big enough, you can just use strcpy (which is indeed safe to use with UTF-8). People using strncpy probably do so because they don't know whether the target buffer is big enough, so they want to pass a maximum number of bytes to copy - which may indeed create invalid UTF-8 sequences.
Gaurang Tandon

This FAQ is a wealth of info. Between that page and this article by Joel Spolsky, you'll have a good start.

One conclusion I came to along the way:

wchar_t is 16 bits on Windows, but not necessarily 16 bits on other platforms. I think it's a necessary evil on Windows, but probably can be avoided elsewhere. The reason it's important on Windows is that you need it to use files that have non-ASCII characters in the name (along with the W version of functions).

Note that Windows APIs that take wchar_t strings expect UTF-16 encoding. Note also that this is different from UCS-2. Take note of surrogate pairs. This test page has enlightening tests.

If you're programming on Windows, you can't use fopen(), fread(), fwrite(), etc. since they only take char * and don't understand UTF-8 encoding. Makes portability painful.
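The usual workaround with Microsoft's CRT is to reach for the wide-character variants. A minimal, MSVC-specific sketch (the file name is arbitrary):

#include <stdio.h>
#include <wchar.h>

int main(void)
{
    /* The file name needs UTF-16; the file contents are still read as bytes. */
    FILE *fp = _wfopen(L"r\u00E9sum\u00E9.txt", L"rb");   /* "résumé.txt" */
    if (fp == NULL) {
        fwprintf(stderr, L"could not open file\n");
        return 1;
    }
    fclose(fp);
    return 0;
}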


Note that stdio f* and friends work with char * on every platform because the standard says so -- use wcs* instead for wchar_t.
Note that Spolsky's article mostly remains valid. However, it claims that UTF-8 could use up to 6 bytes for a single character. In practice, Unicode limits the range of code points to U+0000 .. U+10FFFF. And all those characters can be encoded in 1-4 bytes in UTF-8. As a consequence of the encoding rules, bytes 0xC0, 0xC1, 0xF5-0xFF cannot appear in valid UTF-8.
approxiblue

To do strict Unicode programming:

Only use string APIs that are Unicode aware (NOT strlen, strcpy, ... but their wide-string counterparts wcslen, wcscpy, ...)

When dealing with a block of text, use an encoding that allows storing Unicode characters without loss (UTF-7, UTF-8, UTF-16, UTF-32, ...). Note that UCS-2, unlike UTF-16, is limited to the BMP, so it cannot store every Unicode character.

Check that your OS default character set is Unicode compatible (e.g., UTF-8)

Use fonts that are Unicode compatible (e.g. arial_unicode)

Multi-byte character sequences are an encoding approach that pre-dates the UTF-16 encoding (the one normally used with wchar_t), and it seems to me they are rather Windows-only.

I've never heard of wint_t.


wint_t is a type defined in <wchar.h> and <wctype.h>, just like wchar_t is. It has the same role with respect to wide characters that int has with respect to char; it can hold any wide character value or WEOF.
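A small sketch of the usual pattern, analogous to the int/EOF idiom with getchar() (not from the original comment):

#include <locale.h>
#include <stdio.h>
#include <wchar.h>
#include <wctype.h>

int main(void)
{
    setlocale(LC_ALL, "");

    wint_t wc;                                /* wide enough for any wchar_t plus WEOF */
    while ((wc = fgetwc(stdin)) != WEOF)
        putwchar((wchar_t)towupper(wc));      /* uppercase wide characters where defined */
    return 0;
}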
Community

The most important thing is to always make a clear distinction between text and binary data. Try to follow the model of Python 3.x str vs. bytes or SQL TEXT vs. BLOB.

Unfortunately, C confuses the issue by using char for both "ASCII character" and int_least8_t. You'll want to do something like:

typedef char UTF8; // for code units of UTF-8 strings
typedef unsigned char BYTE; // for binary data

You might want typedefs for UTF-16 and UTF-32 code units too, but this is more complicated because the encoding of wchar_t is not defined. You'll just need to use preprocessor #ifs (a sketch of how they combine follows the list below). Some useful macros in C and C++0x are:

__STDC_UTF_16__ — If defined, the type char16_t (from <uchar.h>) exists and is UTF-16.

__STDC_UTF_32__ — If defined, the type char32_t (from <uchar.h>) exists and is UTF-32.

__STDC_ISO_10646__ — If defined, then wchar_t is UTF-32.

_WIN32 — On Windows, wchar_t is UTF-16, even though this breaks the standard.

WCHAR_MAX — Can be used to determine the size of wchar_t, but not whether the OS uses it to represent Unicode.
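Putting those macros together, a sketch of the kind of preprocessor dispatch described above (the names UTF16 and UTF32 are just examples, in the same spirit as the typedefs earlier; not a definitive recipe):

#include <wchar.h>

#if defined(__STDC_UTF_16__) && defined(__STDC_UTF_32__)
  #include <uchar.h>
  typedef char16_t UTF16;            /* code units are guaranteed UTF-16 */
  typedef char32_t UTF32;            /* code units are guaranteed UTF-32 */
#elif defined(_WIN32)
  typedef wchar_t      UTF16;        /* Windows: wchar_t is UTF-16 */
  typedef unsigned int UTF32;
#elif defined(__STDC_ISO_10646__)
  typedef unsigned short UTF16;
  typedef wchar_t        UTF32;      /* wchar_t holds ISO 10646 (UTF-32) values */
#else
  #error "Cannot determine suitable UTF-16/UTF-32 code unit types"
#endif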

Does this imply that my code should not use char types anywhere and that functions need to be used that can deal with wint_t and wchar_t?

See also:

UTF-8 or UTF-16 or UTF-32 or UCS-2

Is wchar_t needed for Unicode support?

No. UTF-8 is a perfectly valid Unicode encoding that uses char* strings. It has the advantage that if your program is transparent to non-ASCII bytes (e.g., a line ending converter which acts on \r and \n but passes through other characters unchanged), you'll need to make no changes at all!

If you go with UTF-8, you'll need to change all the assumptions that char = character (e.g., don't call toupper in a loop) or char = screen column (e.g., for text wrapping).

If you go with UTF-32, you'll have the simplicity of fixed-width characters (though not fixed-width graphemes), but you will need to change the type of all of your strings.

If you go with UTF-16, you'll have to discard both the assumption of fixed-width characters and the assumption of 8-bit code units, which makes this the most difficult upgrade path from single-byte encodings.

I would recommend actively avoiding wchar_t because it's not cross-platform: sometimes it's UTF-32, sometimes it's UTF-16, and sometimes it's a pre-Unicode East Asian encoding. I'd recommend using your own typedefs instead.

Even more importantly, avoid TCHAR.


I don't think that's unfortunate at all - char being an integer type is a benefit. Literal character constants come to mind as one use. And functions that take a char * can have problems if passed a const char *, last I remember (but I'm vague on this and on which functions, so take it with a pinch of salt). Just because it is more complicated than in other languages doesn't mean it's a bad design.
Since plain char can be signed, using plain char for UTF8 risks problems with sign extension. Use unsigned char for UTF8 too — or uint8_t.
PolyThinker

From what I know, wchar_t is implementation dependent (as can be seen from this wiki article). And it's not Unicode.


佚名

I wouldn't trust any standard library implementation. Just roll your own Unicode types.

#include <windows.h>

typedef unsigned char  utf8_t;
typedef unsigned short utf16_t;   /* assumes 16-bit code units, as on Windows */
typedef unsigned long  utf32_t;

int main ( int argc, char *argv[] )
{
  int msgBoxId;
  /* Greek alpha, beta, gamma, delta separated by tabs, as raw UTF-16 code units. */
  utf16_t lpText[] = { 0x03B1, 0x0009, 0x03B2, 0x0009, 0x03B3, 0x0009, 0x03B4, 0x0000 };
  utf16_t lpCaption[] = L"Greek Characters";   /* works here because wchar_t is 16 bits on Windows */
  unsigned int uType = MB_OK;

  (void)argc;
  (void)argv;

  /* Call the wide-character (UTF-16) version of the API explicitly. */
  msgBoxId = MessageBoxW( NULL, (LPCWSTR)lpText, (LPCWSTR)lpCaption, uType );
  (void)msgBoxId;
  return 0;
}

Chris Tang

You basically want to deal with strings in memory as wchar_t arrays instead of char. When you do any kind of I/O (like reading/writing files) you can encode/decode using UTF-8 (this is probably the most common encoding) which is simple enough to implement. Just google the RFCs. So in-memory nothing should be multi-byte. One wchar_t represents one character. When you come to serializing however, that's when you need to encode to something like UTF-8 where some characters are represented by multiple bytes.
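For reference, the encoding step really is short. A sketch of encoding one code point, assuming it is a valid scalar value not above U+10FFFF (the function name is made up; it does not reject surrogate values):

#include <stddef.h>

/* Encode one code point into UTF-8.  out must have room for 4 bytes;
   returns how many bytes were written. */
size_t utf8_encode(unsigned long cp, unsigned char *out)
{
    if (cp < 0x80) {
        out[0] = (unsigned char)cp;
        return 1;
    } else if (cp < 0x800) {
        out[0] = (unsigned char)(0xC0 | (cp >> 6));
        out[1] = (unsigned char)(0x80 | (cp & 0x3F));
        return 2;
    } else if (cp < 0x10000) {
        out[0] = (unsigned char)(0xE0 | (cp >> 12));
        out[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        out[2] = (unsigned char)(0x80 | (cp & 0x3F));
        return 3;
    } else {
        out[0] = (unsigned char)(0xF0 | (cp >> 18));
        out[1] = (unsigned char)(0x80 | ((cp >> 12) & 0x3F));
        out[2] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        out[3] = (unsigned char)(0x80 | (cp & 0x3F));
        return 4;
    }
}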

You'll also have to write new versions of strcmp etc. for the wide character strings, but this isn't a big issue. The biggest problem will be interop with libraries/existing code that only accept char arrays.

And when it comes to sizeof(wchar_t) (you will need 4 bytes if you want to do it right) you can always redefine it to a larger size with typedef/macro hacks if you need to.