
unsigned int vs. size_t

I notice that modern C and C++ code seems to use size_t instead of int/unsigned int pretty much everywhere - from parameters for C string functions to the STL. I am curious as to the reason for this and the benefits it brings.


Craig M. Brandenburg

The size_t type is the unsigned integer type that is the result of the sizeof operator (and the offsetof operator), so it is guaranteed to be big enough to contain the size of the biggest object your system can handle (e.g., a static array of 8 GB).
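A minimal sketch of this in practice -- sizeof yields a size_t, and C99's %zu is the matching printf conversion:

    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
        int arr[1000];

        size_t n = sizeof arr;  /* the result of sizeof has type size_t */
        printf("sizeof arr     = %zu bytes\n", n);
        printf("sizeof(size_t) = %zu bytes\n", sizeof(size_t));
        return 0;
    }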

The size_t type may be bigger than, equal to, or smaller than an unsigned int, and your compiler might make assumptions about it for optimization.

You may find more precise information in the C99 standard, section 7.17, a draft of which is available on the Internet in PDF format, or in the C11 standard, section 7.19, also available as a PDF draft.


Nope. Think of x86-16 with the large (not huge) memory model: Pointers are far (32-bit), but individual objects are limited to 64k (so size_t can be 16-bit).
"size of the biggest object" is not poor wording, but absolutely correct. The sixe of an object can be much more limited than the address space.
"your compiler might make assumption about it": I would hope the compiler knows the exact range of values that size_t can represent! If it doesn't, who does?
@Marc: I think the point was more that the compiler might be able to do something with that knowledge.
I just wish this increasingly popular type didn't require the inclusion of a header file.
StaceyGirl

Classic C (the early dialect of C described by Brian Kernighan and Dennis Ritchie in The C Programming Language, Prentice-Hall, 1978) didn't provide size_t. The C standards committee introduced size_t to eliminate a portability problem.

Explained in detail at embedded.com (with a very good example)


Another great article explaining both size_t and ptrdiff_t: viva64.com/en/a/0050
StaceyGirl

In short, size_t is never negative, and it maximizes performance because it's typedef'd to be the unsigned integer type that's big enough -- but not too big -- to represent the size of the largest possible object on the target platform.

Sizes should never be negative, and indeed size_t is an unsigned type. Also, because size_t is unsigned, it can store numbers roughly twice as big as the corresponding signed type, because the bit otherwise used for the sign contributes to the magnitude like all the other bits. Gaining that one bit roughly doubles the representable range: a 16-bit signed type tops out at 32767, while a 16-bit unsigned type reaches 65535.

So, you ask, why not just use an unsigned int? It may not be able to hold big enough numbers. On an implementation where unsigned int is 16 bits, the biggest number it can represent is 65535. Some processors, such as the IP16L32, can copy objects larger than 65535 bytes.

So, you ask, why not use an unsigned long int? It exacts a performance toll on some platforms. Standard C requires that a long occupy at least 32 bits. An IP16L32 platform implements each 32-bit long as a pair of 16-bit words. Almost all 32-bit operations on these platforms require two instructions, if not more, because they work with the 32 bits in two 16-bit chunks. For example, moving a 32-bit long usually requires two machine instructions -- one to move each 16-bit chunk.

Using size_t avoids this performance toll. According to this fantastic article, "Type size_t is a typedef that's an alias for some unsigned integer type, typically unsigned int or unsigned long, but possibly even unsigned long long. Each Standard C implementation is supposed to choose the unsigned integer that's big enough--but no bigger than needed--to represent the size of the largest possible object on the target platform."
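As a concrete illustration of the article's point, here is a sketch of a memcpy-style interface (copy_bytes is a hypothetical stand-in, not the library's actual implementation). Because the count parameter is a size_t, it is exactly as wide as the largest possible object -- no wider, so no multi-word arithmetic on platforms like the IP16L32, and no narrower:

    #include <stddef.h>

    /* Hypothetical memcpy-style routine: the byte count is a size_t. */
    void *copy_bytes(void *dst, const void *src, size_t n)
    {
        unsigned char *d = dst;
        const unsigned char *s = src;
        while (n-- > 0)
            *d++ = *s++;
        return dst;
    }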


Sorry to comment on this after so long, but I just had to confirm the biggest number that an unsigned int can hold - perhaps I'm misunderstanding your terminology, but I thought that the biggest number an unsigned int can hold is 4294967295, 65535 being the maximum of an unsigned short.
If your unsigned int occupies 32 bits, then yes, the biggest number it can hold is 2^32 - 1, which is 4294967295 (0xffffffff). Do you have another question?
@Mitch: The largest value that can be represented in an unsigned int can and does vary from one system to another. It's required to be at least 65536, but it's commonly 4294967295 and could be 18446744073709551615 (2**64-1) on some systems.
The largest value a 16-bit unsigned int can contain is 65535, not 65536. A small but important difference, as 65536 is the same as 0 in a 16-bit unsigned int.
@gnasher729: Are you sure about the C++ standard? Having searched for some time I am under the impression that they simply removed all absolute guarantees about integer ranges (excluding unsigned char). The standard does not seem to contain the string '65535' or '65536' anywhere, and '+32767' only occurs (1.9:9) in a note as possible largest integer representable in int; no guarantee is given even that INT_MAX cannot be smaller than that!
Kevin S.

The size_t type is the type returned by the sizeof operator. It is an unsigned integer capable of expressing the size in bytes of any memory range supported on the host machine. It is (typically) related to ptrdiff_t in that ptrdiff_t is a signed integer type such that sizeof(ptrdiff_t) and sizeof(size_t) are equal.
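A short sketch of that relationship -- pointer subtraction yields a signed ptrdiff_t (printed with %td), while sizeof yields an unsigned size_t (printed with %zu):

    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
        double buf[16];

        ptrdiff_t dist = &buf[15] - &buf[0]; /* pointer difference: signed   */
        size_t    size = sizeof buf;         /* object size: unsigned size_t */

        printf("distance = %td elements, size = %zu bytes\n", dist, size);
        return 0;
    }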

When writing C code you should always use size_t whenever dealing with memory ranges.

The int type on the other hand is basically defined as the size of the (signed) integer value that the host machine can use to most efficiently perform integer arithmetic. For example, on many older PC-type computers the value sizeof(size_t) would be 4 (bytes) but sizeof(int) would be 2 (bytes). 16-bit arithmetic was faster than 32-bit arithmetic, though the CPU could handle a (logical) memory space of up to 4 GiB.

Use the int type only when you care about efficiency, as its actual precision depends strongly on both compiler options and machine architecture. In particular, the C standard specifies the following invariants: sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long), placing no other limitations on the actual precision available to the programmer for each of these primitive types.

Note: This is NOT the same as in Java (which actually specifies the bit precision for each of the types 'char', 'byte', 'short', 'int' and 'long').
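A quick way to see what your own implementation chose (the output is platform-dependent; a typical LP64 Linux box prints 1, 2, 4, 8):

    #include <stdio.h>

    int main(void)
    {
        /* sizeof(char) is 1 by definition; for the rest, only the
         * ordering and the minimum ranges are guaranteed. */
        printf("char : %zu\n", sizeof(char));
        printf("short: %zu\n", sizeof(short));
        printf("int  : %zu\n", sizeof(int));
        printf("long : %zu\n", sizeof(long));
        return 0;
    }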


The de facto definition of int is that it's 16 bits on 16-bit machines and 32 bits on anything larger. Too much code has been written that assumes int is 32 bits wide to change this now, and as a result people should always use size_t or {,u}int{8,16,32,64}_t when they want something specific -- as a precaution, people should just always use these instead of the basic integer types.
"It is an unsigned integer capable of expressing the size in bytes of any memory range supported on the host machine." --> No. size_t is capable of representing the size of any single object (e.g.: number, array, structure). The entire memory range may exceed size_t
"When writing C code you should always use size_t whenever dealing with memory ranges." -- that implies that every index to every array should be size_t - I hope you don't mean that. Most of the time we don't deal with arrays where cardinality of address space + portability even matters. In these cases you'd take size_t. In every other case you take indices out of (signed) integers. Because the confusion (that comes without warning) arrising from unsuspected underflow behaviour of unsigneds is more common and worse than portability problems that may arise in the other cases.
@johannes_lalala I hope you don't mean that. There is no other type that is guarantee to be large enough to hold the largest valid array index. signed integers are bad when it comes to overflow or underflow, since they cause UB while unsigned over and underflow does not cause UB but is well defined. size_t should be used for all non-negative array indexes.
Yes, I did. You almost never need the big indices, I have never in 20 years. If you think you might hit big index, use size_t. In all other cases, don't. Consider this common range validation: index = x - y.. later: if (index < 0) -> fail, otherwise z = arr[index].. What will happen if you use unsigned integers here. That's also the c++ commitee's official stance toward this topic btw
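A runnable sketch of the pitfall described in the comment above (the values of x and y are hypothetical):

    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
        size_t x = 2, y = 5;            /* hypothetical values with x < y      */
        size_t index = x - y;           /* wraps around to a huge value        */

        if (index < 0)                  /* always false for an unsigned type;  */
            return 1;                   /* most compilers warn about this test */

        printf("index = %zu\n", index); /* e.g. 18446744073709551613 on LP64   */
        return 0;
    }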
Maciej Hehl

Type size_t must be big enough to store the size of any possible object. Unsigned int doesn't have to satisfy that condition.

For example, on 64-bit systems int and unsigned int may be 32 bits wide, but size_t must be big enough to store numbers bigger than 4G.
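A sketch of the difference, assuming a typical LP64 system (32-bit unsigned int, 64-bit size_t; both widths are implementation-defined):

    #include <stdio.h>

    int main(void)
    {
        /* 5 GiB fits in a 64-bit size_t but not in a 32-bit unsigned int. */
        size_t big = (size_t)5 * 1024 * 1024 * 1024;
        unsigned int narrow = (unsigned int)big; /* silently truncated */

        printf("as size_t      : %zu\n", big);
        printf("as unsigned int: %u\n", narrow);
        return 0;
    }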


"object" is the language used by the standard.
I think size_t would only have to be that big if the compiler could accept a type X such that sizeof(X) would yield a value bigger than 4G. Most compilers would reject e.g. typedef unsigned char foo[1000000000000LL][1000000000000LL], and even foo[65536][65536]; could be legitimately rejected if it exceeded a documented implementation-defined limit.
@MattJoiner: The wording is fine. "Object" is not vague at all, but rather defined to mean "region of storage".
Graeme Burke

This excerpt from the glibc manual 0.02 may also be relevant when researching the topic:

There is a potential problem with the size_t type and versions of GCC prior to release 2.4. ANSI C requires that size_t always be an unsigned type. For compatibility with existing systems' header files, GCC defines size_t in `stddef.h' to be whatever type the system's `sys/types.h' defines it to be. Most Unix systems that define size_t in `sys/types.h' define it to be a signed type. Some code in the library depends on size_t being an unsigned type, and will not work correctly if it is signed.

The GNU C library code which expects size_t to be unsigned is correct. The definition of size_t as a signed type is incorrect. We plan that in version 2.4, GCC will always define size_t as an unsigned type, and the `fixincludes' script will massage the system's `sys/types.h' so as not to conflict with this.

In the meantime, we work around this problem by telling GCC explicitly to use an unsigned type for size_t when compiling the GNU C library. `configure' will automatically detect what type GCC uses for size_t and arrange to override it if necessary.


StaceyGirl

If my compiler is set to 32-bit, size_t is nothing other than a typedef for unsigned int. If my compiler is set to 64-bit, size_t is nothing other than a typedef for unsigned long long.


It can also just be defined as unsigned long for both cases on some OSes.
Anonymous

size_t is the size of a pointer.

So on 32-bit systems, or the common ILP32 (int, long, pointer) model, size_t is 32 bits, and on 64-bit systems, or the common LP64 (long, pointer) model, size_t is 64 bits (int is still 32 bits).

There are other models, but these are the ones that g++ uses (at least by default).


size_t is not necessarily the same size as a pointer, though it commonly is. A pointer has to be able to point to any location in memory; size_t only has to be big enough to represent the size of the largest single object.
intptr_t probably has the same size as a void * pointer. This is not a requirement, but intptr_t must be able to hold all the possible valid values of a void * pointer, whereas size_t does not have this requirement. Also, size_t is at least 16 bits, while a pointer can be smaller.
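A sketch comparing the three widths on whatever platform compiles it (note that intptr_t is an optional type, so <stdint.h> may not provide it everywhere; on a typical LP64 system all three print 8):

    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    int main(void)
    {
        /* These often coincide, but only intptr_t is tied to void *;
         * size_t is tied to object sizes. */
        printf("sizeof(void *)   = %zu\n", sizeof(void *));
        printf("sizeof(size_t)   = %zu\n", sizeof(size_t));
        printf("sizeof(intptr_t) = %zu\n", sizeof(intptr_t));
        return 0;
    }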