
max value of integer

In C, the integer (for a 32-bit machine) is 32 bits, and it ranges from -32,768 to +32,767. In Java, the integer (long) is also 32 bits, but it ranges from -2,147,483,648 to +2,147,483,647.

I do not understand how the range is different in Java, even though the number of bits is the same. Can someone explain this?

To get the max and min values of int in Java, use Integer.MAX_VALUE and Integer.MIN_VALUE.
@stackuser - Some good answers to your question - you should accept one :)
@DarraghEnright he was last seen in March 2015; I doubt he's coming back :(
@Adrian haha - I guess not! Happens a bit I suppose. I always imagined that SO could easily auto-accept answers under certain conditions - where the question is over a certain age, the OP is AWOL and there's a clearly useful answer with a high number of upvotes.
@DarraghEnright Agree. But OP was here ~2 weeks ago, he had the chance to accept, so technically he's not away.

gaborsch

In C, the language itself does not determine the representation of certain datatypes. It can vary from machine to machine; on embedded systems an int can be 16 bits wide, though usually it is 32 bits.

The only requirement is that sizeof(short int) <= sizeof(int) <= sizeof(long int). There is also a recommendation that int should represent the native word size of the processor.

These types are signed by default. The unsigned modifier allows you to use the highest bit as part of the value (otherwise it is reserved for the sign bit).
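
For illustration, a small sketch using the exact-width types from <stdint.h>, showing how the same 16-bit pattern reads as signed versus unsigned (the signed conversion is implementation-defined before C23, but yields -1 on two's complement machines):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* 0xFFFF is -1 when the high bit is a sign bit (int16_t),
           but 65535 when it is a value bit (uint16_t). */
        int16_t  s = (int16_t)0xFFFF;
        uint16_t u = (uint16_t)0xFFFF;
        printf("signed: %d, unsigned: %u\n", s, (unsigned int)u);
        return 0;
    }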

Here's a short table of the possible values for the possible data types:

          width                     minimum                         maximum
signed    8 bit                        -128                            +127
signed   16 bit                     -32 768                         +32 767
signed   32 bit              -2 147 483 648                  +2 147 483 647
signed   64 bit  -9 223 372 036 854 775 808      +9 223 372 036 854 775 807
unsigned  8 bit                           0                            +255
unsigned 16 bit                           0                         +65 535
unsigned 32 bit                           0                  +4 294 967 295
unsigned 64 bit                           0     +18 446 744 073 709 551 615
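
To see what a particular implementation actually uses, you can print the widths and the <limits.h> constants. A minimal sketch (the output varies from platform to platform):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* Widths are implementation-defined, so this output differs by platform. */
        printf("short: %zu bits, %d to %d\n",
               sizeof(short) * CHAR_BIT, SHRT_MIN, SHRT_MAX);
        printf("int:   %zu bits, %d to %d\n",
               sizeof(int) * CHAR_BIT, INT_MIN, INT_MAX);
        printf("long:  %zu bits, %ld to %ld\n",
               sizeof(long) * CHAR_BIT, LONG_MIN, LONG_MAX);
        return 0;
    }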

In Java, the Java Language Specification determines the representation of the data types.

The sizes are: byte 8 bits, short 16 bits, int 32 bits, long 64 bits. All of these types are signed; there are no unsigned versions. However, bit manipulations treat the numbers as if they were unsigned (that is, all bits are handled correctly).

The character data type char is 16 bits wide, unsigned, and holds characters using UTF-16 encoding (however, it is possible to assign a char an arbitrary unsigned 16-bit integer, even one that is an invalid character code point).

          width                     minimum                         maximum

SIGNED
byte:     8 bit                        -128                            +127
short:   16 bit                     -32 768                         +32 767
int:     32 bit              -2 147 483 648                  +2 147 483 647
long:    64 bit  -9 223 372 036 854 775 808      +9 223 372 036 854 775 807

UNSIGNED
char:    16 bit                           0                         +65 535

The C standard also specifies minimum values for INT_MAX, LONG_MAX, etc.
Java 8 now has unsigned Integer arithmetic as well: docs.oracle.com/javase/8/docs/api/java/lang/Integer.html
Thanks, @jkbkot, good to know that. Although it seems that the representation is still signed, and certain unsigned operations are implemented as functions. It's hard to add two unsigned ints...
@GaborSch In Java, int foo = Integer.MAX_VALUE + 1; System.out.println(Integer.toUnsignedLong(foo)); prints 2147483648 and char is an unsigned type
@howlger Integer.MAX_VALUE + 1 is 0x80000000 in hex, because of the overflow (and is equal to Integer.MIN_VALUE). If you convert it to unsigned (long), the sign bit will be treated like a value bit, so it will be 2147483648. Thank you for the char note. char is unsigned, you're right, but char is not really used for calculations, which is why I left it off the list.
unwind

In C, the integer(for 32 bit machine) is 32 bit and it ranges from -32768 to +32767.

Wrong. A 32-bit signed integer in two's complement representation has the range -2^31 to 2^31 - 1, which is equal to -2,147,483,648 to 2,147,483,647.
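
For illustration, a minimal sketch that prints those bounds and their bit patterns, assuming a platform where int is 32 bits:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* On a 32-bit two's complement int, the minimum value is the
           sign-bit pattern 0x80000000 and the maximum is 0x7FFFFFFF. */
        printf("INT_MIN = %d (0x%08X)\n", INT_MIN, (unsigned int)INT_MIN);
        printf("INT_MAX = %d (0x%08X)\n", INT_MAX, (unsigned int)INT_MAX);
        return 0;
    }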


Ivaylo Strandjev

A 32-bit integer ranges from -2,147,483,648 to 2,147,483,647. However, the fact that you are on a 32-bit machine does not mean your C compiler uses 32-bit integers.


At least my copy of Kernighan and Ritchie's "The C Programming Language" says in A4.2 that int is of the "natural width of the machine", which I'd interpret as 32 bits when compiling for 32-bit machines.
This depends on the compiler, not the machine, I believe. I had a 16-bit compiler installed on my 64-bit machine, for instance.
Of course your 16-bit compiler for 16-bit x86 code did only use 16 bits. But that was not my point. Even a 32-bit x86 processor running in 16-bit mode has a native capacity of only 16 bits. My point is that the compiler's target platform matters. E.g. if you have a compiler for your 80286, you will still generate 16-bit code and hence have 16-bit integers.
@junix I believe that is exactly what I point out in my answer. It is not the OS that specifies how many bits your integers have. The target platform is a property of the compiler, not of the OS it is working on or the processor you have.
As I wrote in my first comment: "It's 32 bits when compiling for 32-bit machines." The OP writes in his posting "the integer (for 32 bit machine)", so from what I understand he is not referring to his OS or his machine; he is referring to his target platform.
John Bode

The C language definition specifies minimum ranges for various data types. For int, this minimum range is -32767 to 32767, meaning an int must be at least 16 bits wide. An implementation is free to provide a wider int type with a correspondingly wider range. For example, on the SLES 10 development server I work on, the range is -2147483648 to 2147483647.

There are still some systems out there that use 16-bit int types (All The World Is Not A VAX, nor an x86), but there are plenty that use 32-bit int types, and maybe a few that use 64-bit.

The C language was designed to run on different architectures. Java was designed to run in a virtual machine that hides those architectural differences.
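
Since only the minimum ranges are guaranteed, one common portability technique in C is to assert the width you need at compile time. A small sketch:

    #include <limits.h>
    #include <stdio.h>

    /* The standard only guarantees that int covers -32767..32767, so
       portable code checks the actual limits instead of assuming 32 bits. */
    #if INT_MAX < 2147483647
    #error "This program assumes an int of at least 32 bits"
    #endif

    int main(void)
    {
        printf("int range on this implementation: %d to %d\n", INT_MIN, INT_MAX);
        return 0;
    }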


For a 16-bit int, it is -32768 to 32767. For a 32-bit int, it is -2147483648 to 2147483647. In general, the range is from -2^(n-1) to +2^(n-1) - 1 for an n-bit type.
@Maven: 5.2.4.2.1 - INT_MIN is specified as -32767. Don't assume two's complement.
UmNyobe

The strict equivalent of the Java int is long int in C.

Edit: If int32_t is defined, then it is the equivalent in terms of precision. long int guarantees the precision of the Java int, because it is guaranteed to be at least 32 bits in size.
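
For illustration, a minimal sketch using the exact-width type from <stdint.h>, where the implementation provides it:

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        /* int32_t, where available, is exactly 32 bits, two's complement,
           the same representation the JLS mandates for Java's int. */
        printf("int32_t: %" PRId32 " to %" PRId32 "\n", INT32_MIN, INT32_MAX);
        return 0;
    }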


You are right; the equivalent is int32_t, if it is defined by your compiler.
Brill Pappin

The poster has their Java types mixed up. In Java, his C int is a short:

short (16 bit) = -32,768 to 32,767
int (32 bit) = -2,147,483,648 to 2,147,483,647

http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html


BlueLettuce16

That's because in C, "integer" on a 32-bit machine doesn't mean that 32 bits are used for storing it; it may be 16 bits as well. It depends on the machine (it is implementation-dependent).


Well, it's worth noting that the typical implementation behavior is using the "machine width" for int. But limits.h helps you find out the exact truth.
But in reality, I don't think a C compiler for a 32-bit machine has ever been made without int being 32 bits. The standard may allow a compiler's implementation of int to be of a moronic nature, but for some reason, nobody wants to make a moronic C compiler. The trend is to make useful C compilers.
Alex

Actually, the size in bits of int, short, and long depends on the compiler implementation.

E.g. on my 64-bit Ubuntu, long is 64 bits, while on a 32-bit Ubuntu version it is 32 bits.


Emos Turi

It is actually quite simple to understand; you can even compute it with the Google calculator. You have 32 bits for an int, and computers are binary, so each bit can hold 2 values. 2^32 is 4,294,967,296, the total number of distinct bit patterns. Half of those represent negative integers, so dividing by 2 gives 2,147,483,648 negative values, down to -2,147,483,648. The other half covers the non-negative values, but one of those patterns represents 0, so the largest positive value is 2,147,483,647, one less than 2,147,483,648.
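
The same counting argument, expressed in a few lines of C for illustration:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t patterns = (uint64_t)1 << 32; /* 2^32 = 4,294,967,296 bit patterns */
        uint64_t half     = patterns / 2;      /* 2,147,483,648 of them are negative */

        printf("minimum: -%llu\n", (unsigned long long)half);
        printf("maximum:  %llu (one pattern is taken by 0)\n",
               (unsigned long long)(half - 1));
        return 0;
    }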

And that's it. It depends on the machine, not on the language.


This is not what he asked for... the question is "why is the C int different from the Java int?"
And in Java, the size of int does not depend on the machine. int == 32-bit signed, two's-complement is defined by the Java language specification, and engraved on sheets of anodized unobtainium. (OK, maybe not the last bit.)
Achintya Jha

In C, the range for __int32 is -2147483648 to 2147483647. See here for full ranges.

unsigned short  0 to 65535
signed short    -32768 to 32767
unsigned long   0 to 4294967295
signed long     -2147483648 to 2147483647

There is no guarantee that an int will be 32 bits. If you want to use variables of a specific size, particularly when writing code that involves bit manipulation, you should use the standard fixed-width integer types from stdint.h, as sketched below.
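
A minimal sketch of the idea; uint32_t is exactly 32 bits wherever it is provided, so masks and shifts behave the same on every such platform:

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t x    = 0x12345678u;
        uint32_t high = (x & 0xFFFF0000u) >> 16; /* extract the high half-word */

        printf("high half-word: 0x%04" PRIX32 "\n", high); /* prints 0x1234 */
        return 0;
    }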

In Java

The int data type is a 32-bit signed two's complement integer. It has a minimum value of -2,147,483,648 and a maximum value of 2,147,483,647 (inclusive).


The values you quote for C are only minimum ranges.
@OliCharlesworth Range is from minimum to maximum.
What I mean is, the range for each type is allowed to be larger than what you've quoted above.
There is nothing in C called __int32. Microsoft has no strictly conforming C compiler, so who cares about how their non-C compiler works? The only relevant source is ISO9899, either 5.2.4.2.1 "Sizes of integer types" or 7.20.2.1 "Limits of exact-width integer types". None of which is compatible with the Microsoft goo.
C99 does add int32_t, int16_t, etc., to the standard. Not 100% compatible with Microsoft's additions, but they work in similar ways.
Carlos UA

In standard C, you can use INT_MAX as the maximum int value; this constant must be defined in "limits.h". Similar constants are defined for other types (http://www.acm.uiuc.edu/webmonkeys/book/c_guide/2.5.html). As stated, these constants are implementation-dependent, but they have minimum magnitudes according to the minimum bits for each type, as specified in the standard.
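
For illustration, a sketch of one typical use of these constants: checking for overflow before adding, since signed overflow is undefined behavior in C:

    #include <limits.h>
    #include <stdio.h>

    static int safe_add(int a, int b, int *out)
    {
        /* Reject sums that would fall outside [INT_MIN, INT_MAX]. */
        if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
            return 0; /* the sum would overflow */
        *out = a + b;
        return 1;
    }

    int main(void)
    {
        int sum;
        if (safe_add(INT_MAX, 1, &sum))
            printf("%d\n", sum);
        else
            printf("overflow avoided\n");
        return 0;
    }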


This doesn't really get around to addressing the OP's question. Also, core parts of an answer really shouldn't be buried on another site.