
Difference between text and varchar (character varying)

What's the difference between the text data type and the character varying (varchar) data types?

According to the documentation

If character varying is used without length specifier, the type accepts strings of any size. The latter is a PostgreSQL extension.

and

In addition, PostgreSQL provides the text type, which stores strings of any length. Although the type text is not in the SQL standard, several other SQL database management systems have it as well.

So what's the difference?


r
rogerdpack

There is no difference, under the hood it's all varlena (variable length array).
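You can see this in the system catalogs: all three map to variable-length (varlena) storage, reported as typlen = -1 (bpchar is the internal name of char(n)):

SELECT typname, typlen
FROM pg_type
WHERE typname IN ('text', 'varchar', 'bpchar');
-- typlen = -1 marks a variable-length (varlena) type for all three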

Check this article from Depesz: http://www.depesz.com/index.php/2010/03/02/charx-vs-varcharx-vs-varchar-vs-text/

A couple of highlights:

To sum it all up:

char(n) – takes too much space when dealing with values shorter than n (pads them to n), and can lead to subtle errors because of adding trailing spaces; plus it is problematic to change the limit

varchar(n) – it's problematic to change the limit in a live environment (requires an exclusive lock while altering the table)

varchar – just like text

text – for me a winner: over the (n) data types because it lacks their problems, and over varchar because it has a distinct name

The article does detailed testing to show that insert and select performance is similar for all four data types. It also takes a detailed look at alternative ways of constraining the length when needed. Function-based constraints or domains provide the advantage of instantly increasing the length constraint, and on the basis that decreasing a string length constraint is rare, depesz concludes that one of them is usually the best choice for a length limit.
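A rough sketch of the domain approach the article describes (the names here are illustrative; the auto-generated constraint name follows the usual <name>_check pattern):

-- A domain attaches a length rule to text.
CREATE DOMAIN short_text AS text
  CHECK (char_length(VALUE) <= 50);

CREATE TABLE users (name short_text);

-- Raising the limit later swaps a constraint on the domain;
-- the table's column type never changes. (ADD CHECK re-scans
-- existing rows unless NOT VALID is used.)
ALTER DOMAIN short_text DROP CONSTRAINT short_text_check;
ALTER DOMAIN short_text ADD CHECK (char_length(VALUE) <= 100);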


@axiopisty It's a great article. You could just say, "Could you pull in some excerpts in case the article ever goes down?" I've tried to briefly summarize the article's content/conclusions. I hope this is enough to ease your concerns.
@axiopisty, strictly speaking, the initial answer was saying "under the hood it's all varlena", which is certainly useful information that distinguishes this answer from a link-only answer.
One thing to keep in mind with a limitless string is that it opens the potential for abuse. If you allow a user to have a last name of any size, you may have someone storing LARGE amounts of info in your last-name field. In an article about the development of reddit, they give the advice to "Put a limit on everything".
@MarkHildreth Good point, though generally constraints like that are enforced further out in an application these days—so that the rules (and attempted violations/retries) can be handled smoothly by the UI. If someone does still want to do this sort of thing in the database they could use constraints. See blog.jonanin.com/2013/11/20/postgresql-char-varchar which includes "an example of using TEXT and constraints to create fields with more flexibility than VARCHAR".
It is really alarming that this comment has so many votes. text should never, ever be considered "a winner over varchar" out of the box just because it allows strings of any length; on the contrary, you should really think about what kind of data you want to store before allowing your users to input strings of any length. And NO, "let the frontend handle it" is definitely not acceptable and a very bad development practice. Really surprising to see a lot of devs doing this nowadays.
Jeffrey04

As "Character Types" in the documentation points out, varchar(n), char(n), and text are all stored the same way. The only difference is extra cycles are needed to check the length, if one is given, and the extra space and time required if padding is needed for char(n).

However, when you only need to store a single character, there is a slight performance advantage to using the special type "char" (keep the double-quotes — they're part of the type name). You get faster access to the field, and there is no overhead to store the length.

I just made a table of 1,000,000 random "char" chosen from the lower-case alphabet. A query to get a frequency distribution (select count(*), field ... group by field) takes about 650 milliseconds, vs about 760 on the same data using a text field.
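A sketch of that kind of test (timings will of course vary):

-- One million random lower-case letters stored as the one-byte "char" type.
CREATE TABLE t_char AS
  SELECT chr(97 + (random()*25)::int)::"char" AS field
  FROM generate_series(1, 1000000);

-- Frequency distribution; rebuild the table with a text column
-- (drop the ::"char" cast) to compare, e.g. under \timing in psql.
SELECT field, count(*) FROM t_char GROUP BY field;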


technically the quotes aren't part of the type name. they are needed to differentiate it from the char keyword.
Technically you are correct @Jasen... Which, of course, is the best kind of correct
datatype "char" is not char?? It is valid in nowadays of PostgreSQL 11+? ... Yes: "The type "char" (note the quotes) is different from char(1) in that it only uses one byte of storage. It is internally used in the system catalogs as a simplistic enumeration type.", guide/datatype-character.
4 revs, 2 users 99%

(this answer is a Wiki, you can edit - please correct and improve!)

UPDATING BENCHMARKS FOR 2016 (pg9.5+)

And using "pure SQL" benchmarks (without any external script)

use any string_generator with UTF8 main benchmarks:

2.1. INSERT

2.2. SELECT comparing and counting

-- Generates $2 space-separated "words", each $1 hex chars plus one
-- random Unicode symbol. Declared VOLATILE because it calls random().
CREATE FUNCTION string_generator(int DEFAULT 20, int DEFAULT 10) RETURNS text AS $f$
  SELECT array_to_string( array_agg(
    substring(md5(random()::text),1,$1)||chr( 9824 + (random()*10)::int )
  ), ' ' ) as s
  FROM generate_series(1, $2) i(x);
$f$ LANGUAGE SQL VOLATILE;

Prepare specific test (examples)

DROP TABLE IF EXISTS test;
-- CREATE TABLE test ( f varchar(500));
-- CREATE TABLE test ( f text); 
CREATE TABLE test ( f text  CHECK(char_length(f)<=500) );

Perform a basic test:

INSERT INTO test  
   SELECT string_generator(20+(random()*(i%11))::int)
   FROM generate_series(1, 99000) t(i);

And other tests,

CREATE INDEX q on test (f);

SELECT count(*) FROM (
  SELECT substring(f,1,1) || f FROM test WHERE f<'a0' ORDER BY 1 LIMIT 80000
) t;

... And use EXPLAIN ANALYZE.
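For example, prefixing the SELECT above:

EXPLAIN ANALYZE
SELECT count(*) FROM (
  SELECT substring(f,1,1) || f FROM test WHERE f<'a0' ORDER BY 1 LIMIT 80000
) t;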

UPDATED AGAIN 2018 (pg10)

A little edit to add 2018's results and reinforce the recommendations.

Results in 2016 and 2018

My results, averaged across many machines and many tests: all the same (differences statistically smaller than the standard deviation).

Recommendation

Use the text datatype and avoid old varchar(x), because sometimes it is not standard; e.g. in CREATE FUNCTION clauses, varchar(x)≠varchar(y).

Express limits (with the same performance as varchar!) with a CHECK clause in the CREATE TABLE, e.g. CHECK(char_length(x)<=10). With a negligible loss of performance in INSERT/UPDATE, you can also control ranges and string structure, e.g. CHECK(char_length(x)>5 AND char_length(x)<=20 AND x LIKE 'Hello%'), as in the sketch below.
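A minimal sketch of that recommendation (the table name is illustrative):

-- text plus CHECK: varchar-style limit, richer rules, same performance.
CREATE TABLE messages (
  x text CHECK (char_length(x) > 5 AND char_length(x) <= 20 AND x LIKE 'Hello%')
);

INSERT INTO messages VALUES ('Hello world');  -- accepted
INSERT INTO messages VALUES ('Hi');           -- rejected by the CHECK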


So it does not matter that I made all of my columns varchar instead of text? I did not specify the length, even though some are only 4-5 characters and certainly not 255.
@trench yes, it does not matter
cool, I redid it to be safe and I made everything text anyway. It worked well and it was super easy to add millions of historical records quickly anyways.
@trench and reader: the only exception is the faster datatype "char", which is not char, even in PostgreSQL 11+. As guide/datatype-character says, "The type "char" (note the quotes) is different from char(1) in that it only uses one byte of storage. It is internally used in the system catalogs as a simplistic enumeration type.".
still valid with pg11 in 2019: text>varchar(n)>text_check>char(n)
a_horse_with_no_name

From the PostgreSQL manual:

There is no performance difference among these three types, apart from increased storage space when using the blank-padded type, and a few extra CPU cycles to check the length when storing into a length-constrained column. While character(n) has performance advantages in some other database systems, there is no such advantage in PostgreSQL; in fact character(n) is usually the slowest of the three because of its additional storage costs. In most situations text or character varying should be used instead.

I usually use text

References: http://www.postgresql.org/docs/current/static/datatype-character.html


sotn

In my opinion, varchar(n) has its own advantages. Yes, they all use the same underlying type and all that. But it should be pointed out that indexes in PostgreSQL have a size limit of 2712 bytes per index row.

TL;DR: If you use a text column without a constraint and have indexes on it, it is very possible that you hit this limit for some of your rows and get an error when you try to insert data; with varchar(n), you can prevent that.

Some more details: The problem here is that PostgreSQL doesn't raise any error when creating an index for a text column, or for a varchar(n) column where n is greater than 2712. However, it will raise an error when a record whose compressed size is greater than 2712 bytes is inserted. That means you can easily insert a string of 100,000 repetitive characters, because it compresses to far below 2712 bytes, yet you may not be able to insert a 4,000-character string whose compressed size is greater than 2712 bytes. Using varchar(n), where n is not too much greater than 2712, you're safe from these errors.
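A hedged way to reproduce this (the exact threshold and message wording vary by version; newer B-tree versions report 2704):

CREATE TABLE docs (body text);
CREATE INDEX docs_body_idx ON docs (body);

-- Highly repetitive: compresses far below the limit, so it fits.
INSERT INTO docs SELECT repeat('a', 100000);

-- High-entropy: barely compresses, so the INSERT fails with an
-- "index row size ... exceeds maximum ... for index" error.
INSERT INTO docs
  SELECT string_agg(md5(random()::text), '')
  FROM generate_series(1, 200);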


Later Postgres versions raise the error already when creating the index on a text column; it only works with varchar (the version without the (n)). Only tested with embedded Postgres, though.
Referring to stackoverflow.com/questions/39965834/…, which links to the PostgreSQL wiki: wiki.postgresql.org/wiki/… gives the maximum row size as 400 GB, so the stated 2712-byte limit per row looks wrong. Maximum size for a database? unlimited (32 TB databases exist). Maximum size for a table? 32 TB. Maximum size for a row? 400 GB. Maximum size for a field? 1 GB. Maximum number of rows in a table? unlimited.
@BillWorthington The numbers you posted don't take indexes into account, though. The 2712-byte figure is the B-tree index limit; it's an implementation detail, so you won't find it in the documentation. You can easily test it yourself, or just google "postgresql index row size exceeds maximum 2712 for index", e.g..
I am new to PostgreSQL, so I am not an expert. I am working on a project where I want to store news articles in a column in a table. Looks like the text column type is what I will use. A total row size of 2712 bytes sounds way too low for a database that is supposed to be close to the same level as Oracle. Do I understand you correctly that you are referring to indexing a large text field? Not trying to challenge or argue with you, just trying to understand the real limits. If there are no indexes involved, would the row limit then be 400 GB as in the wiki? Thanks for your fast response.
@BillWorthington You should research about Full Text Search. Check this link e.g.
bpd

text and varchar have different implicit type conversions. The biggest impact that I've noticed is handling of trailing spaces. For example ...

select ' '::char = ' '::varchar, ' '::char = ' '::text, ' '::varchar = ' '::text

returns true, false, true and not true, true, true as you might expect.
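A quick way to see why (a sketch; char_length shows where the trailing space survives each conversion):

SELECT char_length(' '::char),        -- 0: trailing blanks are insignificant in char(n)
       char_length(' '::char::text),  -- 0: casting char to text strips trailing blanks
       char_length(' '::varchar);     -- 1: varchar keeps the space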


How is this possible? If a = b and a = c then b = c.
Tested, and it is indeed true. Impossible, but true. Very, very strange.
It's because the = operator is not only comparing the stuff, but it also does some conversions to find a common type for the values. It's pretty common behaviour in various languages, and the used conversions also differ between languages. For example in JavaScript you can see that [0 == '0.0', 0 == '0', '0.0' == '0'] -> [true, true, false]
Greg

Somewhat OT: if you're using Rails, the standard formatting of web pages may differ. For data-entry forms, text columns get scrollable multi-line boxes, but character varying (Rails string) columns get one-line boxes. Show views are as long as needed either way.


Chris Halcrow

A good explanation from http://www.sqlines.com/postgresql/datatypes/text:

The only difference between TEXT and VARCHAR(n) is that you can limit the maximum length of a VARCHAR column, for example, VARCHAR(255) does not allow inserting a string more than 255 characters long. Both TEXT and VARCHAR have the upper limit at 1 Gb, and there is no performance difference among them (according to the PostgreSQL documentation).
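For example (a small sketch):

CREATE TABLE t (s varchar(5));
INSERT INTO t VALUES ('abcdef');
-- ERROR:  value too long for type character varying(5)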


Dharman

I wasted way too much time because of using varchar instead of text for PostgreSQL arrays.

PostgreSQL array operators do not work with string (varchar) columns. Refer to these links for more details: (https://github.com/rails/rails/issues/13127) and (http://adamsanderson.github.io/railsconf_2013/?full#10).
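A sketch of the failure mode those links describe (the table name is illustrative):

CREATE TABLE posts (tags varchar[]);

-- ARRAY['ruby'] is typed as text[], and the polymorphic array operators
-- require both sides to have the same element type, so this fails:
--   ERROR:  operator does not exist: character varying[] @> text[]
SELECT * FROM posts WHERE tags @> ARRAY['ruby'];

-- Works: cast the literal, or declare the column as text[] instead.
SELECT * FROM posts WHERE tags @> ARRAY['ruby']::varchar[];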


Ran into the exact same problem...
ivansabik

If you only use the TEXT type, you can run into issues when using AWS Database Migration Service:

Large objects (LOBs) are used but target LOB columns are not nullable

Due to their unknown and sometimes large size, large objects (LOBs) require more processing and resources than standard objects. To help with tuning migrations of systems that contain LOBs, AWS DMS offers the following options

If you are only sticking to PostgreSQL for everything, you're probably fine. But if you are going to interact with your db via ODBC or external tools like DMS, you should consider not using TEXT for everything.


ofir_aghai

character varying(n), varchar(n) - (Both the same). If the value exceeds n characters, an error is raised rather than a silent truncation, unless the excess characters are all spaces, in which case the value is truncated to n characters.

character(n), char(n) - (Both the same). Fixed length, blank-padded to n characters. Over-length values follow the same rule: an error unless the excess is all spaces.

text - Unlimited length.

Example:

Table test:
   a character(7)
   b varchar(7)

INSERT INTO test VALUES ('ok', 'ok');

We get the results:

    a     | (a)char_length | (a)octet_length |  b   | (b)char_length
----------+----------------+-----------------+------+----------------
"ok     " |              2 |               7 | "ok" |              2

char_length ignores the trailing pad blanks of character(n); octet_length shows that a is physically padded to 7 bytes, while b stores only the 2 characters that were inserted.

While MySQL will silently truncate the data when the value exceeds the column size, PostgreSQL will not and will raise a "value too long for type character varying(n)" error.
@gsiems Neither will truncate. MSSQL will throw an exception (msg 8152, level 16, state 30: String or binary data would be truncated). PostgreSQL will do the same, EXCEPT if the overflow is only spaces (then, it will truncate without raising an exception)
@JCKödel gsiems was talking about MySQL, not MSSQL.