
What is the maximum number of bytes for a UTF-8 encoded character?

What is the maximum number of bytes for a single UTF-8 encoded character?

I'll be encrypting the bytes of a String encoded in UTF-8 and therefore need to be able to work out the maximum number of bytes for a UTF-8 encoded String.

Could someone confirm the maximum number of bytes for a single UTF-8 encoded character, please?

You did look at common resources, such as Wikipedia's UTF-8 Article, first ... right?
I read several articles which gave mixed answers... I actually got the impression the answer was 3 so I'm very glad I asked
I'll leave a YouTube link here featuring Tom Scott's video Characters, Symbols and the Unicode Miracle: goo.gl/sUr1Hf. You get to hear and see how everything evolved from ASCII character encoding to UTF-8.

Community

The maximum number of bytes per character is 4 according to RFC 3629, which limited the code point range to U+10FFFF:

In UTF-8, characters from the U+0000..U+10FFFF range (the UTF-16 accessible range) are encoded using sequences of 1 to 4 octets.

(The original specification allowed for up to six byte character codes for code points past U+10FFFF.)

Characters with a code less than 128 will require 1 byte only, and the next 1920 character codes require 2 bytes only. Unless you are working with an esoteric language, multiplying the character count by 4 will be a significant overestimation.
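To make the ranges concrete, here is a short Python sketch (the sample characters are my own illustration) showing one character from each byte-length class:

```python
# One sample character per UTF-8 byte-length class.
samples = [
    ("A", 1),   # U+0041: ASCII (code < 128), 1 byte
    ("é", 2),   # U+00E9: code < 0x800 (the "next 1920" codes), 2 bytes
    ("€", 3),   # U+20AC: rest of the BMP, 3 bytes
    ("𐍈", 4),   # U+10348: beyond the BMP, 4 bytes
]
for ch, expected in samples:
    encoded = ch.encode("utf-8")
    assert len(encoded) == expected
    print(f"U+{ord(ch):04X} -> {len(encoded)} byte(s): {encoded.hex(' ')}")
```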


What is an "esoteric language" to you? Any language that exists in the real world, or text that switches between different languages of the world? Should a developer of a UTF-8-to-String function choose 2, 3 or 4 as the multiplier if they over-allocate and then downsize the result after the actual conversion?
@rinntech by 'esoteric language' he means a language that has a lot of high value unicode chars (something from near the bottom of this list: unicode-table.com/en/sections ). If you must over-allocate, choose 4. You could do a double pass, one to see how many bytes you'll need and allocate, then another to do the encoding; that may be better than allocating ~4 times the RAM needed.
Always try to handle worst case: hacker9.com/single-message-can-crash-whatsapp.html
CJKV characters mostly take 3 bytes (with some rare/archaic characters taking 4 bytes) and calling them esoteric is a bit of a stretch (China alone is almost 20% of the world's population...).
Why was it limited to 4 when it was previously 6? What stops us from continuing the standard and having a lead byte of 11111111, giving a 2^(6*7)-code space for characters?
Community

Without further context, I would say that the maximum number of bytes for a character in UTF-8 is

answer: 6 bytes

The author of the accepted answer correctly pointed this out as the "original specification". That was valid through RFC 2279. As J. Cocoe pointed out in the comments below, this changed in 2003 with RFC 3629, which limits UTF-8 to encoding 21 bits, which can be handled with an encoding scheme using four bytes.

answer if covering all unicode: 4 bytes

But in Java <= v7, they talk about a 3-byte maximum for representing Unicode with UTF-8? That's because the original Unicode specification only defined the Basic Multilingual Plane (BMP); i.e., it is an older version of Unicode, or a subset of modern Unicode. So

answer if representing only original unicode, the BMP: 3 bytes

But the OP talks about going the other way: not from characters to UTF-8 bytes, but from UTF-8 bytes to a native "String" representation. Perhaps the author of the accepted answer got that from the context of the question, but it is not necessarily obvious, so it may confuse the casual reader of this question.

Going from UTF-8 to native encoding, we have to look at how the "String" is implemented. Some languages, like Python >= 3, represent each character with integer code points, which allows 4 bytes per character = 32 bits to cover the 21 we need for Unicode, with some waste. Why not exactly 21 bits? Because things are faster when they are byte-aligned. Some languages, like Python <= 2 and Java, represent characters using a UTF-16 encoding, which means they have to use surrogate pairs to represent extended Unicode (beyond the BMP). Either way, that's still 4 bytes maximum.

answer if going UTF-8 -> native encoding: 4 bytes

So, final conclusion, 4 is the most common right answer, so we got it right. But, mileage could vary.
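The 4-byte ceiling holds in both directions; a quick Python check (the character is chosen purely for illustration) shows a non-BMP code point costing 4 bytes in UTF-8 and also 4 bytes, as a surrogate pair, in UTF-16:

```python
ch = "😀"  # U+1F600, outside the BMP
utf8 = ch.encode("utf-8")
utf16 = ch.encode("utf-16-le")  # little-endian, no BOM
assert len(utf8) == 4   # one four-byte UTF-8 sequence
assert len(utf16) == 4  # two 16-bit code units: the surrogate pair D83D DE00
print(utf8.hex(" "), "|", utf16.hex(" "))
```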


"this is still the current and correct specification, per wikipedia" -- not any more. Shortly after you wrote this (April 2nd edit), Wikipedia's UTF-8 article was changed to clarify that the 6-octet version isn't part of the current (2003) UTF-8 spec.
"But, in Java <= v7, they talk about a 3-byte maximum for representing unicode with UTF-8? That's because the original unicode specification only defined the basic multi-lingual plane" -- That is probably the original reason, but it's not the whole story. Java uses "modified UTF-8", and one of the modifications is that it "uses its own two-times-three-byte format" instead of "the four-byte format of standard UTF-8" (their words).
There are no codepoints allocated above the 10FFFF (just over a million) limit and many of the UTF8 implementations never implemented sequences longer than 4 bytes (and some only 3, eg MySQL) so I would consider it safe to hard limit to 4 bytes per codepoint even when considering compatibility with older implementations. You would just need to ensure you discard anything invalid on the way in. Note that matiu's recommendation of allocating after calculating exact byte length is a good one where possible.
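The exact-byte-length pass recommended above can be sketched in Python; `utf8_len` is a hypothetical helper name, and the thresholds come straight from the UTF-8 encoding ranges:

```python
def utf8_len(s: str) -> int:
    """Exact UTF-8 byte length of a string, one code point at a time."""
    total = 0
    for ch in s:
        cp = ord(ch)
        if cp < 0x80:
            total += 1      # ASCII
        elif cp < 0x800:
            total += 2
        elif cp < 0x10000:
            total += 3      # rest of the BMP
        else:
            total += 4      # supplementary planes, up to U+10FFFF
    return total

# Matches the built-in encoder, without allocating the encoded bytes.
assert utf8_len("Aé€😀") == len("Aé€😀".encode("utf-8")) == 10
```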
"... [U]nicode can represent up to x10FFFF code points. So, including 0, that means we can do it with these bytes: F FF FF, i.e. two-and-a-half bytes, or 20 bits." I believe this is a bit incorrect. The number of code points from 0x0 through 0x10FFFF would be 0x110000, which could be represented in 1F FF FF, or 21 bits. The 0x110000 number corresponds to the 17 planes of 0x10000 code points each.
PSA: Wikipedia is not a real source. Look at the article's actual references.
David Spector

The maximum number of bytes to support US-ASCII, a standard English alphabet encoding, is 1. But limiting text to English is becoming less desirable or practical as time goes by.

Unicode was designed to represent the glyphs of all human languages, as well as many kinds of symbols, with a variety of rendering characteristics. UTF-8 is an efficient encoding for Unicode, although still biased toward English. UTF-8 is self-synchronizing: character boundaries are easily identified by scanning for well-defined bit patterns in either direction.

While the maximum number of bytes per UTF-8 character is 3 when supporting only the 2-byte code space of Plane 0, the Basic Multilingual Plane (BMP), which can be accepted as minimal support in some applications, it is 4 for supporting all 17 current planes of Unicode (as of 2019). It should be noted that many popular "emoji" characters are located in Plane 1, the Supplementary Multilingual Plane, and require 4 bytes.
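A character's plane is simply its code point divided by 0x10000; this Python snippet (sample characters are illustrative) shows BMP characters versus a Plane 1 emoji:

```python
for ch in ("A", "中", "😀"):
    cp = ord(ch)
    plane = cp >> 16  # each plane spans 0x10000 code points
    print(f"U+{cp:04X}: plane {plane}, {len(ch.encode('utf-8'))} UTF-8 byte(s)")
```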

However, this is just for basic character glyphs. There are also various modifiers, such as accents that appear over the previous character, and it is also possible to link together an arbitrary number of code points to construct one complex "grapheme". In real-world programming, therefore, the use or assumption of a fixed maximum number of bytes per character will likely eventually cause a problem for your application.
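For instance, Python makes it easy to see that one visible "character" may span several code points and many bytes (the family emoji below is an illustrative joined sequence):

```python
# 'é' built from a base letter plus a combining acute accent:
grapheme = "e\u0301"
assert len(grapheme) == 2                  # two code points
assert len(grapheme.encode("utf-8")) == 3  # 1 byte + 2 bytes

# A family emoji: three emoji joined by zero-width joiners, one grapheme.
family = "\U0001F468\u200D\U0001F469\u200D\U0001F466"
assert len(family.encode("utf-8")) == 18   # 4 + 3 + 4 + 3 + 4 bytes
print(len(family), "code points,", len(family.encode("utf-8")), "bytes")
```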

These considerations imply that UTF-8 character strings should not be "expanded" into arrays of fixed length prior to processing, as has sometimes been done. Instead, programming should be done directly, using string functions specifically designed for UTF-8.


Note: the paragraph about not using a fixed-width array of characters is my own opinion. I'm willing to edit this answer in response to comments.
Also note that Klingon is in Unicode too, so it's not just human languages. As for your recommendation, it all comes down to what you're optimizing for and what benchmarks tell you. Sometimes it's faster to rip through a known number of bytes without conditional logic or branching. Branching can harm performance severely. If you preprocessed the text, you'd still have to do the branching, but at least the heavier computation would be ripping through contiguous memory with zero branches. If you want to optimize for space, though, it's not a good idea.
Klingon is a human language, meaning that it was designed by Marc Okrand and other humans to achieve human purposes. Klingon is not an extraterrestrial language, since the planet Klingon does not exist. As to your apparent defense of the common practice of using six-byte arrays for internal handling of characters, we will have to agree to disagree. Such limits are bugs.
With UTF encoding, the max number of bytes is 4. Depending on the symbols used, you can get away with 1 byte (e.g. English with punctuation) or 2 bytes (If you know there aren't emoji, Chinese, Japanese, etc.). The advantage of preprocessing comes into play more strongly if you run algorithms on the text multiple times. Otherwise, you will have a bunch of branching each time you run an algorithm (although your CPU's branch detector will help a lot if the symbols used result in predictable branching). I didn't say preprocessing is better, only that it can be and testing is needed.
The minimum number of bytes needed when using a fixed-length array is 6 if you wish to encode emoji, which are quite popular these days. In my own coding, I have found that there is no need to program using fixed-length arrays at all. Whatever you are trying to do can probably be achieved using either byte-oriented programming or by obtaining the actual character length by scanning the UTF-8 bytes.
Nikita Zlobin

Considering just the technical limitations, it is possible to have up to 7 bytes following the current UTF-8 encoding scheme. According to it, if the first byte is not a self-sufficient ASCII character, then it must have the pattern 1(n)0X(7-n), where n is <= 7.

Theoretically it could even be 8, but then the first byte would have no zero bit at all. While other aspects, like continuation bytes differing from leading bytes, would still hold (allowing error detection), I have heard that the byte 11111111 may be invalid, but I can't be sure about that.

The limitation to a maximum of 4 bytes is most likely for compatibility with UTF-16, which I tend to consider legacy, because the only quality in which it excels is processing speed, and only when the string byte order matches (i.e. we read 0xFEFF in the BOM).
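The arithmetic behind that lead-byte pattern can be sketched as follows: an n-byte sequence carries 7-n payload bits in the lead byte plus 6 per continuation byte, which is how the original 6-byte scheme reached 31 bits and the modern 4-byte limit yields 21 (`payload_bits` is an illustrative helper, not part of any standard API):

```python
def payload_bits(n: int) -> int:
    """Payload bits in an n-byte sequence under the original UTF-8 scheme."""
    if n == 1:
        return 7                      # 0xxxxxxx
    return (7 - n) + 6 * (n - 1)      # 1..10xxx lead byte + 10xxxxxx bytes

for n in range(1, 7):
    print(f"{n} byte(s): {payload_bits(n)} bits, "
          f"max U+{(1 << payload_bits(n)) - 1:X}")
```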