
What's the difference between utf8_general_ci and utf8_unicode_ci?

Between utf8_general_ci and utf8_unicode_ci, are there any differences in terms of performance?

If you like utf8[mb4]_unicode_ci, you may like utf8[mb4]_unicode_520_ci even more.
I don't know how I feel about that - instead of fixing their implementation to follow the latest Unicode standard they keep the obsolete version as the default and people have to add "520" to use the proper one now. And it's not forwards and backwards compatible because you can't use the "520" version on older MySQL versions. Why couldn't they have just updated their existing collation? Same with "mb4", really. What code really depended on the old, limited/obsolete behaviour to justify keeping that as the default?
Still better is 8.0's default of utf8mb4_0900_ai_ci.
And 8.0 sped up utf8 comparisons significantly. (Probably all collations of utf8/utf8mb4)

thomasrutter

For those people still arriving at this question in 2020 or later, there are newer options that may be better than both of these. For example, utf8_unicode_520_ci.

All these collations are for the UTF-8 character encoding. The differences are in how text is sorted and compared.

_unicode_ci and _general_ci are two different sets of rules for sorting and comparing text according to the way we expect. Newer versions of MySQL introduce new sets of rules, too, such as _unicode_520_ci for equivalent rules based on Unicode 5.2, or the MySQL 8.x specific _0900_ai_ci for equivalent rules based on Unicode 9.0 (and with no equivalent _general_ci variant). People reading this now should probably use one of these newer collations instead of either _unicode_ci or _general_ci. The description of those older collations below is provided for interest only.

MySQL is currently transitioning away from an older, flawed UTF-8 implementation. For now, you need to use utf8mb4 instead of utf8 for the character encoding part, to ensure you are getting the fixed version. The flawed version remains for backward compatibility, though it is being deprecated.
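
As a concrete illustration, here is a minimal sketch of declaring the character set and a modern collation explicitly when creating a schema (the database and table names are made up; on MySQL 8.0+ the server default is already utf8mb4 with utf8mb4_0900_ai_ci, while on older versions utf8mb4_unicode_520_ci is a reasonable choice):

-- Hypothetical schema: pick utf8mb4 plus a modern collation explicitly.
CREATE DATABASE app_db
  CHARACTER SET utf8mb4
  COLLATE utf8mb4_0900_ai_ci;

CREATE TABLE app_db.users (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci
) ENGINE = INNODB;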

Key differences

utf8mb4_unicode_ci is based on the official Unicode rules for universal sorting and comparison, which sorts accurately in a wide range of languages.

utf8mb4_general_ci is a simplified set of sorting rules which aims to do as well as it can while taking many short-cuts designed to improve speed. It does not follow the Unicode rules and will result in undesirable sorting or comparison in some situations, such as when using particular languages or characters. On modern servers, this performance boost will be all but negligible. It was devised in a time when servers had a tiny fraction of the CPU performance of today's computers.

Benefits of utf8mb4_unicode_ci over utf8mb4_general_ci

utf8mb4_unicode_ci, which uses the Unicode rules for sorting and comparison, employs a fairly complex algorithm for correct sorting in a wide range of languages and when using a wide range of special characters. These rules need to take into account language-specific conventions; not everybody sorts their characters in what we would call 'alphabetical order'.

As far as Latin (i.e. "European") languages go, there is not much difference between the Unicode sorting and the simplified utf8mb4_general_ci sorting in MySQL, but there are still a few differences:

For example, the Unicode collation sorts "ß" like "ss", and "Œ" like "OE", as people using those characters would normally want, whereas utf8mb4_general_ci sorts them as single characters (presumably like "s" and "e" respectively).

Some Unicode characters are defined as ignorable, which means they shouldn't count toward the sort order and the comparison should move on to the next character instead. utf8mb4_unicode_ci handles these properly.
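
If you want to see the expansion behaviour for yourself, a quick check is to compare the strings directly (this assumes the connection character set is utf8mb4, e.g. after SET NAMES utf8mb4):

-- Expansions: 'ß' compares equal to 'ss' under the Unicode rules,
-- but not under the simplified general_ci rules.
SELECT 'ß' = 'ss' COLLATE utf8mb4_unicode_ci AS unicode_rules,  -- expected: 1
       'ß' = 'ss' COLLATE utf8mb4_general_ci AS general_rules;  -- expected: 0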

In non-Latin languages, such as Asian languages or languages with different alphabets, there may be many more differences between Unicode sorting and the simplified utf8mb4_general_ci sorting. The suitability of utf8mb4_general_ci will depend heavily on the language used. For some languages, it'll be quite inadequate.

What should you use?

There is almost certainly no reason to use utf8mb4_general_ci anymore, as we have left behind the point where CPU speed is low enough that the performance difference would be important. Your database will almost certainly be limited by other bottlenecks than this.

In the past, some people recommended using utf8mb4_general_ci except when accurate sorting was going to be important enough to justify the performance cost. Today, that performance cost has all but disappeared, and developers are treating internationalization more seriously.

There's an argument to be made that if speed is more important to you than accuracy, you may as well not do any sorting at all. It's trivial to make an algorithm faster if you do not need it to be accurate. So, utf8mb4_general_ci is a compromise that's probably not needed for speed reasons and probably also not suitable for accuracy reasons.

One other thing I'll add is that even if you know your application only supports the English language, it may still need to deal with people's names, which can often contain characters from other languages, and which it is just as important to sort correctly. Using the Unicode rules for everything helps add peace of mind that the very smart Unicode people have worked very hard to make sorting work properly.

What the parts mean

Firstly, ci is for case-insensitive sorting and comparison. This means it's suitable for textual data where case is not important. The other types of collation are cs (case-sensitive) for textual data where case is important, and bin, where the encoding needs to match bit for bit, which is suitable for fields that are really encoded binary data (including, for example, Base64). Case-sensitive sorting leads to some weird results, and case-sensitive comparison can result in duplicate values differing only in letter case, so case-sensitive collations are falling out of favor for textual data - if case is significant to you, then otherwise ignorable punctuation and so on is probably also significant, and a binary collation might be more appropriate.
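
As a quick illustration of the difference between a case-insensitive and a binary collation (again assuming an utf8mb4 connection):

-- Case-insensitive vs binary comparison of the same two strings.
SELECT 'abc' = 'ABC' COLLATE utf8mb4_general_ci AS ci_match,   -- 1: case is ignored
       'abc' = 'ABC' COLLATE utf8mb4_bin        AS bin_match;  -- 0: the bytes differ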

Next, unicode or general refers to the specific sorting and comparison rules - in particular, the way text is normalized or compared. There are many different sets of rules for the utf8mb4 character encoding, with unicode and general being two that attempt to work well in all possible languages rather than one specific one. The differences between these two sets of rules are the subject of this answer. Note that unicode uses rules from Unicode 4.0. Recent versions of MySQL and MariaDB add the rulesets unicode_520 using rules from Unicode 5.2, and MySQL 8.x adds 0900 (dropping the "unicode_" part) using rules from Unicode 9.0.

And lastly, utf8mb4 is of course the character encoding used internally. In this answer I'm talking only about Unicode based encodings.
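
To see which of these collations your own server offers, and what your existing columns currently use, you can query the server directly:

-- List the collations available for the utf8mb4 character set.
SHOW COLLATION WHERE Charset = 'utf8mb4';

-- Show the collation currently used by each text column in the current database.
SELECT table_name, column_name, collation_name
FROM information_schema.columns
WHERE table_schema = DATABASE()
  AND collation_name IS NOT NULL;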


@KahWeeTeng You should never, ever use utf8_general_ci: it simply doesn’t work. It’s a throwback to the bad old days of ASCII stooopeeedity from fifty years ago. Unicode case-insensitive matching cannot be done without the foldcase map from the UCD. For example, “Σίσυφος” has three different sigmas in it; or how the lowercase of “TSCHüẞ” is “tschüß”, but the uppercase of “tschüß” is “TSCHÜSS”. You can be right, or you can be fast. Therefore you must use utf8_unicode_ci, because if you don’t care about correctness, then it’s trivial to make it infinitely fast.
Is Base64 encoding not just encoded as ASCII? Why would the "bin" part of the collation be relevant to Base64?
@BrianTristamWilliams the collation refers to how text comparison and sorting works. "bin" as the collation means that it's a binary comparison only: no attempt to adapt to any written language conventions will be made and it will be compared purely on the data bits.
The performance gains referenced by @nightcoder do not strike me as negligible. I don't ignore gains of 3%, and 12% is bigger, especially as any db admin makes dozens if not hundreds of choices with performance implications, and they add up. More importantly, sometimes correctness doesn't matter. Most of my databases need to accommodate unicode characters not in basic Latin encodings, but it is very rare that they need to be sorted accurately by these characters; in fact, I can't think of a single instance I've needed this in my whole 20+ year career.
I am, however, skeptical that the performance gains with real-world data would be as big as what @nightcoder claimed; that example was populated with random data. An overwhelming majority of the data in my databases is mostly characters that would exist in a Latin coding, with only occasional other characters thrown in, and those characters are almost never important in sorting. It could be that I agree with your conclusions here but for different reasons. If the performance gains are negligible with most real-world data, I'd happily choose correctness based on some hypothetical future need.
Alessio Cantarella

I wanted to know what the performance difference between using utf8_general_ci and utf8_unicode_ci is, but I did not find any benchmarks on the internet, so I decided to create my own.

I created a very simple table with 500,000 rows:

CREATE TABLE test(
  ID INT(11) DEFAULT NULL,
  Description VARCHAR(20) DEFAULT NULL
)
ENGINE = INNODB
CHARACTER SET utf8
COLLATE utf8_general_ci;

Then I filled it with random data by running this stored procedure:

CREATE PROCEDURE randomizer()
BEGIN
  DECLARE i INT DEFAULT 0;
  DECLARE random CHAR(20) ;
  theloop: loop
    SET random = CONV(FLOOR(RAND() * 99999999999999), 20, 36);
    INSERT INTO test VALUES (i+1, random);
    SET i=i+1;
    IF i = 500000 THEN
      LEAVE theloop;
    END IF;
  END LOOP theloop;
END
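
If you want to reproduce this in the mysql command-line client, the procedure definition most likely needs a changed statement delimiter (so the semicolons inside the body don't end the CREATE PROCEDURE statement early), followed by a single call to populate the table:

DELIMITER //
-- (the CREATE PROCEDURE randomizer() ... END block from above goes here,
--  terminated with // instead of ;)
DELIMITER ;

CALL randomizer();  -- inserts the 500,000 random rows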

Then I created the following stored procedures to benchmark simple SELECT, SELECT with LIKE, and sorting (SELECT with ORDER BY):

CREATE PROCEDURE benchmark_simple_select()
BEGIN
  DECLARE i INT DEFAULT 0;
  theloop: loop
    SELECT *
    FROM test
    WHERE Description = 'test' COLLATE utf8_general_ci;
    SET i = i + 1;
    IF i = 30 THEN
      LEAVE theloop;
    END IF;
  END LOOP theloop;
END;

CREATE PROCEDURE benchmark_select_like()
BEGIN
  DECLARE i INT DEFAULT 0;
  theloop: loop
    SELECT *
    FROM test
    WHERE Description LIKE '%test' COLLATE utf8_general_ci;
    SET i = i + 1;
    IF i = 30 THEN
      LEAVE theloop;
    END IF;
  END LOOP theloop;
END;

CREATE PROCEDURE benchmark_order_by()
BEGIN
  DECLARE i INT DEFAULT 0;
  theloop: loop
    SELECT *
    FROM test
    WHERE ID > FLOOR(1 + RAND() * (400000 - 1))
    ORDER BY Description COLLATE utf8_general_ci LIMIT 1000;
    SET i = i + 1;
    IF i = 10 THEN
      LEAVE theloop;
    END IF;
  END LOOP theloop;
END;

In the stored procedures above, the utf8_general_ci collation is used, but of course during the tests I used both utf8_general_ci and utf8_unicode_ci.

I called each stored procedure 5 times for each collation (5 times for utf8_general_ci and 5 times for utf8_unicode_ci) and then calculated the average values.
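
Note that the explicit COLLATE clause in each benchmark overrides the column's own collation, so the utf8_unicode_ci runs presumably used the same procedures recreated with COLLATE utf8_unicode_ci in place of COLLATE utf8_general_ci (that detail is my assumption; the answer doesn't spell it out). After that, each benchmark is simply invoked and timed by the client:

CALL benchmark_simple_select();
CALL benchmark_select_like();
CALL benchmark_order_by();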

My results are:

benchmark_simple_select()

with utf8_general_ci: 9,957 ms

with utf8_unicode_ci: 10,271 ms

In this benchmark using utf8_unicode_ci is slower than utf8_general_ci by 3.2%.

benchmark_select_like()

with utf8_general_ci: 11,441 ms

with utf8_unicode_ci: 12,811 ms

In this benchmark using utf8_unicode_ci is slower than utf8_general_ci by 12%.

benchmark_order_by()

with utf8_general_ci: 11,944 ms

with utf8_unicode_ci: 12,887 ms

In this benchmark using utf8_unicode_ci is slower than utf8_general_ci by 7.9%.


Nice benchmark, thanks for sharing. I'm getting reasonably similar figures (MySQL v5.6.12 on Windows): 10%, 4%, 8%. I concur: the performance gain of utf8_general_ci is just too small for it to be worth using.
1) But shouldn't this benchmark generate similar results for the two collation by definition? I mean CONV(FLOOR(RAND() * 99999999999999), 20, 36) generates only ASCII, and no Unicode characters to be processed by the algorithms of the collations. 2) Description = 'test' COLLATE ... and Description LIKE 'test%' COLLATE ... only process a single string ("test") at runtime, don't they? 3) In real apps, columns used in ordering would probably be indexed, and indexing speed on different collations with real non-ASCII text might differ.
@HalilÖzgür - your point is partially wrong. I guess it's not about the code point value being outside ASCII (which general_ci would handle correctly), but about specific features, like treating umlauts written as "Umlaute", or some such subtleties.
So, while these performance gains look compelling, I'm wondering if this would work with real world data. You're populating these fields with random characters, but in the real world the data has a lot more structure and the structure is relevant to sorting. Most of my databases have an overwhelming majority of characters that are in a basic Latin encoding, with a small number of other characters often in a field here or there. It's not clear that there would be any performance gains in these circumstances. Would there be? I am curious to run this on some of my real data.
informatik01

This post describes it very nicely.

In short: utf8_unicode_ci uses the Unicode Collation Algorithm as defined in the Unicode standards, whereas utf8_general_ci is a simpler sort order which results in "less accurate" sorting results.


If you don’t care about correctness, then it’s trivial to make any algorithm infinitely fast. Just use utf8_unicode_ci and pretend the other one doesn’t exist.
@tchrist but if you care about a certain balance between correctness and speed, utf8_general_ci may be for you
@tchrist Never become a game programmer ;)
@onassar - MySQL 8.0 claims to have significantly improved performance of all collations.
Dana the Sane

See the MySQL manual, Unicode Character Sets section:

For any Unicode character set, operations performed using the _general_ci collation are faster than those for the _unicode_ci collation. For example, comparisons for the utf8_general_ci collation are faster, but slightly less correct, than comparisons for utf8_unicode_ci. The reason for this is that utf8_unicode_ci supports mappings such as expansions; that is, when one character compares as equal to combinations of other characters. For example, in German and some other languages “ß” is equal to “ss”. utf8_unicode_ci also supports contractions and ignorable characters. utf8_general_ci is a legacy collation that does not support expansions, contractions, or ignorable characters. It can make only one-to-one comparisons between characters.

So to summarize, utf8_general_ci uses a smaller and less correct (according to the standard) set of comparisons than utf8_unicode_ci, which should implement the entire standard. The general_ci set will be faster because there is less computation to do.


There is no such thing as “slightly less correct”. Correctness is a boolean characteristic; it does not admit modifiers of degree. Just use utf8_unicode_ci and pretend the buggy broken version doesn’t exist.
I had problems getting 5.6.15 to take the collation_connection setting, and it turns out you have to pass it in the SET line like 'SET NAMES utf8mb4 COLLATE utf8mb4_unicode_ci'. Credit goes to Mathias Bynens for the solution, here's his very useful guide: mathiasbynens.be/notes/mysql-utf8mb4
@tchrist The problem with saying correctness is boolean is it doesn't take into account situations that don't rely on absolute correctness. Your underlying point isn't invalid nor am I attempting to espouse the benefits of general_ci, but your general statement about correctness is easily disproven. I do it on a daily basis in my profession. Comedy aside, Stuart has a good point here.
With geolocation or game development we trade correctness with performance all the time. And of course correctness is a real number between 0 and 1, not a bool. :) E.G. selecting geo points in a bounding box is an approximation of 'points nearby' which is not as good as calculating the distance between the point and the reference point and filtering on that. But both are an approximation and in fact, complete correctness is mostly not achievable. See the coastline paradox and IEEE 754
TL;DR: Please provide a program that prints the correct result for 1/3
simhumileco

In brief:

If you need better sorting order, use utf8_unicode_ci (this is the preferred method),

but if you are mainly interested in performance, use utf8_general_ci, knowing that it is a little outdated.

The differences in terms of performance are very slight.


Both are outdated now - see accepted answer for more
Kamil Kiełczewski

Some details (Polish example)

As we can read here (Peter Gulutzan), there is a difference in sorting/comparing the Polish letter "Ł" (L with stroke; HTML escape: &#321;) (lower case: "ł"; HTML escape: &#322;). The behaviour is as follows:

utf8_polish_ci      Ł greater than L and less than M
utf8_unicode_ci     Ł greater than L and less than M
utf8_unicode_520_ci Ł equal to L
utf8_general_ci     Ł greater than Z

In the Polish language, the letter Ł comes after L and before M. None of these collations is better or worse - it depends on your needs.
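
If you want to verify these orderings on your own server, the comparisons can be run directly (assuming the literals are sent as utf8, e.g. after SET NAMES utf8); each result should mirror a row of the table above, with 1 meaning true:

SELECT 'Ł' > 'L' COLLATE utf8_polish_ci      AS polish_gt_L,
       'Ł' < 'M' COLLATE utf8_polish_ci      AS polish_lt_M,
       'Ł' > 'L' COLLATE utf8_unicode_ci     AS unicode_gt_L,
       'Ł' = 'L' COLLATE utf8_unicode_520_ci AS unicode520_eq_L,
       'Ł' > 'Z' COLLATE utf8_general_ci     AS general_gt_Z;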


Adam

There are two big differences: sorting and character matching.

Sorting:

utf8mb4_general_ci removes all accents and sorts characters one by one, which may produce incorrect sort results.

utf8mb4_unicode_ci sorts accurately.

Character Matching

They match characters differently.

For example, under utf8mb4_unicode_ci you have i != ı, but under utf8mb4_general_ci it holds that ı = i.

For example, imagine you have a row with name="Yılmaz". Then

select id from users where name='Yilmaz';

would return the row if the collation is utf8mb4_general_ci, but if the column is collated with utf8mb4_unicode_ci it would not return the row!

On the other hand, we have that a = ª and ß = ss in utf8mb4_unicode_ci, which is not the case in utf8mb4_general_ci. So imagine you have a row with name="ªßi"; then

select id from users where name='assi';

would return the row if the collation is utf8mb4_unicode_ci, but would not return a row if the collation is set to utf8mb4_general_ci.
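
A compact way to check these equivalences directly (assuming an utf8mb4 connection; the expected results follow the claims above) is:

-- Dotless ı vs i, and ª vs a, under the two collations.
SELECT 'i' = 'ı' COLLATE utf8mb4_general_ci AS general_i_dotless,  -- 1
       'i' = 'ı' COLLATE utf8mb4_unicode_ci AS unicode_i_dotless,  -- 0
       'a' = 'ª' COLLATE utf8mb4_general_ci AS general_a_ordinal,  -- 0
       'a' = 'ª' COLLATE utf8mb4_unicode_ci AS unicode_a_ordinal;  -- 1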

A full list of matches for each collation may be found here.


DavidH

According to this post, there is a considerable performance benefit on MySQL 5.7 when using utf8mb4_general_ci instead of utf8mb4_unicode_ci: https://www.percona.com/blog/2019/02/27/charset-and-collation-settings-impact-on-mysql-performance/


It's also important to note that the linked analysis observes that there is NOT any significant benefit on MySQL 8.0. So the answer to this question seems like it may be highly dependent on version.