
Why does the Swift language guide suggest using Int "even when values are known to be non-negative"?

This is a question about programming style in Swift, specifically Int vs UInt.

The Swift Programming Language Guide advises programmers to use the generic signed integer type Int even when variables are known to be non-negative. From the guide:

Use UInt only when you specifically need an unsigned integer type with the same size as the platform’s native word size. If this is not the case, Int is preferred, even when the values to be stored are known to be non-negative. A consistent use of Int for integer values aids code interoperability, avoids the need to convert between different number types, and matches integer type inference, as described in Type Safety and Type Inference.

However, UInt will be 32-bit unsigned on 32-bit architectures and 64-bit unsigned on 64-bit architectures, so there is no performance benefit to using Int over UInt.
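A quick way to confirm that the two types are the same size (a minimal sketch using only the Swift standard library):

```swift
// Int and UInt both occupy one platform word, so neither is "smaller"
// or cheaper than the other.
let intSize = MemoryLayout<Int>.size
let uintSize = MemoryLayout<UInt>.size
print(intSize, uintSize)  // e.g. "8 8" on a 64-bit platform
```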

By contrast, the Swift guide gives a later example:

let age = -3
assert(age >= 0, "A person's age cannot be less than zero")
// this causes the assertion to trigger, because age is not >= 0

Here, a runtime issue could be caught at compile time if the code had been written as:

let age:UInt = -3  
// this causes a compiler error because -3 is negative

There are many other cases (for example anything that will index a collection) where using a UInt would catch issues at compile time rather than runtime.
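To illustrate the difference in where the mistake surfaces (a minimal sketch; the array and values are made up for illustration):

```swift
let values = [10, 20, 30]

// With Int, a negative index compiles fine and only fails when the
// subscript is evaluated at run time:
let i: Int = -1
// values[i]  // traps at run time: index out of range

// With UInt, the same mistake is rejected where the value is formed:
// let j: UInt = -1  // compile-time error: -1 is not representable
```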

So the question: is the advice in the Swift Programming Language guide sound, and do the benefits of using Int "even when the values to be stored are known to be non-negative" outweigh the safety advantages of using UInt?

Additional note: Having used Swift for a couple of weeks now, it's clear that UInt is required for interoperability with Cocoa. For example, the AVFoundation framework uses unsigned integers anywhere a "count" is required (number of samples / frames / channels, etc.). Converting these values to Int could lead to serious bugs where values are greater than Int.max.
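Where such a conversion is unavoidable, the standard library's failable `Int(exactly:)` initializer avoids the silent-overflow trap (a minimal sketch; the count value here is a stand-in for one returned by a framework API):

```swift
// Convert an unsigned count to Int without risking a trap or wrap.
let frameCount: UInt = 1_000  // stand-in for a count from an API

if let n = Int(exactly: frameCount) {
    print("safe to use as Int:", n)
} else {
    print("value exceeds Int.max on this platform")
}
```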

Surely it depends on what you consider important. I would tend to agree with the documentation that it just makes the code simpler to work with Int.
@JackJames I agree that it's a matter of the priorities of the individual developer and the code being worked on. I'm not seeing a compelling case one way or the other, which makes me think that Apple shouldn't be advising people not to use UInt when rigidly sticking with Int could cause code to break in all sorts of nasty ways.
The traditional reason for discouraging unsigned integers in C is that a down-counting for loop can easily go wrong; e.g. for (unsigned a = 10; a >= 0; --a) never terminates, because a is always >= 0 by definition.
Agree with Jack and Jamie. Using UInt for known non-negative values conveys intent and semantic information to the next developer who reads the code. If you prioritise interoperability above this, then you might choose Int instead.
@alastair Right — an unsigned a is never less than 0, so any loop condition of the form a >= 0 is always true.

N
Nick Shelley

I don't think using UInt is as safe as you think it is. As you noted:

let age:UInt = -3

results in a compiler error. I also tried:

let myAge:Int = 1
let age:UInt = UInt(myAge) - 3

which also resulted in a compiler error. However, the following (in my opinion much more common in real programs) scenarios produced no compiler error, but instead crashed at runtime with EXC_BAD_INSTRUCTION:

func sub10(num: Int) -> UInt {
    return UInt(num - 10) //Runtime error when num < 10
}
sub10(4)

as well as:

class A {
    var aboveZero:UInt
    init() { aboveZero = 1 }
}
let a = A()
a.aboveZero = a.aboveZero - 10 //Runtime error

Had these been plain Ints, instead of crashing, you could add code to check your conditions:

if a.aboveZero > 0 {
    //Do your thing
} else {
    //Handle bad data
}

I might even go so far as to equate their advice against using UInts to their advice against using implicitly unwrapped optionals: Don't do it unless you are certain you won't get any negatives, because otherwise you'll get runtime errors (except in the simplest of cases).
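For completeness, the trapping subtraction in the answer's sub10 example can be guarded rather than left to crash (a hedged sketch; sub10Safely is a hypothetical name, not part of any framework):

```swift
// Check before subtracting so the UInt conversion can never trap.
func sub10Safely(_ num: Int) -> UInt? {
    guard num >= 10 else { return nil }  // refuse inputs that would go negative
    return UInt(num - 10)
}

sub10Safely(14)  // Optional(4)
sub10Safely(4)   // nil, instead of a runtime crash
```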


You could have added those checks for the UInt case just as easily; you’d just need to check that num and a.aboveZero were >= 10.
Also, you can use num &- 10 to disable the run-time overflow check.
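The wrapping operator mentioned in the comment above behaves like this (a minimal sketch; note that wrapping silently produces a huge value rather than signalling an error):

```swift
let n: UInt = 4
let wrapped = n &- 10  // wraps modulo 2^64 instead of trapping
// On a 64-bit platform, wrapped == UInt.max - 5
```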
B
BTRUE

It says in your question: "A consistent use of Int for integer values aids code interoperability, avoids the need to convert between different number types, and matches integer type inference, as described in Type Safety and Type Inference."

This avoids issues such as assigning an Int to a UInt. Negative Int values reinterpreted as UInt become large positive values instead of the intended negative value, because the binary representation alone doesn't differentiate one type from the other.
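The standard library makes this reinterpretation explicit via `UInt(bitPattern:)`, which shows exactly the large-value effect described above (a minimal sketch):

```swift
// Reinterpreting a negative Int's two's-complement bits as UInt.
let negative = -3
let reinterpreted = UInt(bitPattern: negative)
// On a 64-bit platform, reinterpreted == UInt.max - 2 (i.e. 2^64 - 3)
```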

Also, the two are distinct types (structs, in fact, not classes), neither descended from the other. APIs built to receive Ints cannot receive UInts without overloading, meaning converting between the two would be a common task if UInts were used while most of the framework receives Ints. Converting between the two can become a non-trivial task as well.

The two previous paragraphs speak to "interoperability" and "converting between different number types" — issues that are avoided if UInts aren't used.


Right, so this is really about interoperability with the Swift framework. I still think in general it's safer to use unsigned integers for known non-negative values, but I agree there's no sane conversion between the two types for all possible values of each type. So if interoperability is favoured over safety then I understand the design choice of using Int where possible.