
Advantage of switch over if-else statement

What's the best practice for using a switch statement vs. an if statement for 30 unsigned enumerations where about 10 have an expected action (that presently is the same action)? Performance and space need to be considered but are not critical. I've abstracted the snippet, so don't hate me for the naming conventions.

switch statement:

// numError is an error enumeration type, with 0 being the non-error case
// fire_special_event() is a stub method for the shared processing

switch (numError)
{  
  case ERROR_01 :  // intentional fall-through
  case ERROR_07 :  // intentional fall-through
  case ERROR_0A :  // intentional fall-through
  case ERROR_10 :  // intentional fall-through
  case ERROR_15 :  // intentional fall-through
  case ERROR_16 :  // intentional fall-through
  case ERROR_20 :
  {
     fire_special_event();
  }
  break;

  default:
  {
    // error codes that require no additional action
  }
  break;       
}

if statement:

if ((ERROR_01 == numError)  ||
    (ERROR_07 == numError)  ||
    (ERROR_0A == numError)  || 
    (ERROR_10 == numError)  ||
    (ERROR_15 == numError)  ||
    (ERROR_16 == numError)  ||
    (ERROR_20 == numError))
{
  fire_special_event();
}
Sure, you can look at it from the point of view of which generates the most efficient code, but any modern compiler should be equally efficient either way. In the end, this is more a question of the colour of the bicycle shed.
I disagree; I don't think this is subjective. A difference in the generated assembly matters, and you can't simply disregard that kind of optimization in many cases. And in this question it isn't a religious war or debate: there is a rational explanation of why one would be faster. Just read the accepted answer.
@RichardFranks Off-topic: congrats! You're the first human I've ever seen take over moderation on SO.

Drew Dormann

Use switch.

In the worst case the compiler will generate the same code as an if-else chain, so you don't lose anything. If in doubt, put the most common cases first in the switch statement.

In the best case the optimizer may find a better way to generate the code. Common things a compiler does are building a binary decision tree (which saves compares and jumps in the average case) or simply building a jump table (which works without compares at all).


Technically there will still be one compare, to make sure the enum's value lies within the jump table.
Note that a series of ifs could theoretically be analyzed out to be the same as a switch by a compiler, but why take the chance? By using a switch, you are communicating exactly what you want, which does make code generation easier.
jakoben: That could be done, but only for switch-like if/else chains. In practice these don't occur, because programmers use switch. I dug into compiler technology, and trust me: finding such "useless" constructs takes a lot of time. For compiler writers such an optimization does not make sense.
@NilsPipenbrinck with the ease of building pseudo-recursive if-else chains in template meta programming, and the difficulty of generating switch case chains, that mapping may become more important. (and yes, ancient comment, but the web is forever, or at least until next tuesday)
@Yakk-AdamNevraumont: Indeed, in this case modern compilers (like GCC5 and later, clang as early as 3.0) are able to transform (ERROR_01 == numError) || (ERROR_07 == numError) || ... into the same asm as a switch, checking an immediate bitmap. (gcc.godbolt.org/z/M3G7svaPa). See my answer on this question, from only a couple years after you commented, but I only just happened to notice your comment now. :P
Mark Ransom

For the special case that you've provided in your example, the clearest code is probably:

if (RequiresSpecialEvent(numError))
    fire_special_event();

Obviously this just moves the problem to a different area of the code, but now you have the opportunity to reuse this test. You also have more options for how to solve it. You could use std::set, for example:

bool RequiresSpecialEvent(int numError)
{
    return specialSet.find(numError) != specialSet.end();
}

I'm not suggesting that this is the best implementation of RequiresSpecialEvent, just that it's an option. You can still use a switch or if-else chain, or a lookup table, or some bit-manipulation on the value, whatever. The more obscure your decision process becomes, the more value you'll derive from having it in an isolated function.


This is so true. The readability is so much better than both the switch and the if-statements. I was actually going to answer something like this myself, but you beat me to it. :-)
If your enum values are all small, then you don't need a hash, just a table. e.g. const std::bitset<MAXERR> specialerror(initializer); Use it with if (specialerror[numError]) { fire_special_event(); }. If you want bounds-checking, bitset::test(size_t) will throw an exception on out-of-bounds values. (bitset::operator[] doesn't range-check). cplusplus.com/reference/bitset/bitset/test. This will probably outperform a compiler-generated jump table implementing switch, esp. in the not-special case where this will be a single not-taken branch.
@PeterCordes I still argue that it's better to put the table into its own function. As I said, there are lots of options that open up when you do that, I didn't try to enumerate them all.
@MarkRansom: I didn't mean to disagree with abstracting it. Just since you gave a sample implementation using std::set, I thought I'd point out that it's probably a poor choice. It turns out that gcc already compiles the OP's code to test a bitmap in a 32bit immediate. godbolt: goo.gl/qjjv0e. gcc 5.2 will even do this for the if version. Also, more recent gcc will use the bit-test instruction bt instead of shifting to put a 1 bit in the right place and using test reg, imm32.
This immediate-constant bitmap is a big win, because there's no cache miss on the bitmap. It works if the "special" error codes are all in a range 64 or less. (or 32 for legacy 32bit code.) The compiler subtracts the smallest case value, if it's non-zero. The takeaway is that recent compilers are smart enough that you're probably going to get good code from whatever logic you use, unless you tell it to use a bulky data structure.
Calvin1602

The switch is faster.

Just try if/else-ing 30 different values inside a loop, and compare it to the same code using switch to see how much faster the switch is.

Now, the switch has one real problem: the switch must know the values inside each case at compile time. This means that the following code:

// WON'T COMPILE
extern const int MY_VALUE ;

void doSomething(const int p_iValue)
{
    switch(p_iValue)
    {
       case MY_VALUE : /* do something */ ; break ;
       default : /* do something else */ ; break ;
    }
}

won't compile.

Most people will then use defines (Aargh!), and others will declare and define constant variables in the same compilation unit. For example:

// WILL COMPILE
const int MY_VALUE = 25 ;

void doSomething(const int p_iValue)
{
    switch(p_iValue)
    {
       case MY_VALUE : /* do something */ ; break ;
       default : /* do something else */ ; break ;
    }
}

So, in the end, the developer must choose between "speed + clarity" vs. "code coupling".

(Not that a switch can't be written to be confusing as hell... Most of the switches I currently see are of this "confusing" category... But this is another story...)

Edit 2008-09-21: bk1e added the following comment: "Defining constants as enums in a header file is another way to handle this". Of course it is. The point of an extern type was to decouple the value from the source. Defining the value as a macro, as a simple const int declaration, or even as an enum has the side effect of inlining the value. Thus, should the define, the enum value, or the const int value change, a recompilation would be needed. The extern declaration means there is no need to recompile if the value changes, but on the other hand it makes it impossible to use switch. The conclusion: using switch will increase coupling between the switch code and the variables used as cases. When that is OK, use switch. When it isn't, then no surprise: don't.


Edit 2013-01-15: Vlad Lazarenko commented on my answer, giving a link to his in-depth study of the assembly code generated by a switch. Very enlightening: http://lazarenko.me/switch/


Defining constants as enums in a header file is another way to handle this.
Switch is not always faster.
@Vlad Lazarenko : Thanks for the link! It was a very interesting read.
@AhmedHussein user404725's link is dead. Thankfully, I found it in the WayBack Machine: web.archive.org/web/20131111091431/http://lazarenko.me/2013/01/…. Indeed, the WayBack Machine can be quite a blessing.
Alexandra Franks

The compiler will optimise it anyway; go for the switch as it's the most readable.


Chances are that the compiler will not touch an if-then-else. In fact, gcc will not do that for sure (there is a good reason for that). Clang will optimize both cases into a binary search. For example, see this.
scubabbl

The Switch, if only for readability. Giant if statements are harder to maintain and harder to read in my opinion.

ERROR_01 : // intentional fall-through

or

(ERROR_01 == numError) ||

The latter is more error-prone and requires more typing and formatting than the former.


Bdoserror

Code for readability. If you want to know what performs better, use a profiler, as optimizations and compilers vary, and performance issues are rarely where people think they are.


Community

Compilers are really good at optimizing switch. Recent gcc is also good at optimizing a bunch of conditions in an if.

I made some test cases on godbolt.

When the case values are grouped close together, gcc, clang, and icc are all smart enough to use a bitmap to check if a value is one of the special ones.

e.g. gcc 5.2 -O3 compiles the switch to (and the if something very similar):

errhandler_switch(errtype):  # gcc 5.2 -O3
    cmpl    $32, %edi
    ja  .L5
    movabsq $4301325442, %rax   # highest set bit is bit 32 (the 33rd bit)
    btq %rdi, %rax
    jc  .L10
.L5:
    rep ret
.L10:
    jmp fire_special_event()

Notice that the bitmap is immediate data, so there's no potential data-cache miss accessing it, or a jump table.

gcc 4.9.2 -O3 compiles the switch to a bitmap, but does the 1U<<errNumber with mov/shift. It compiles the if version to series of branches.

errhandler_switch(errtype):  # gcc 4.9.2 -O3
    leal    -1(%rdi), %ecx
    cmpl    $31, %ecx    # cmpl $32, %edi  wouldn't have to wait an extra cycle for lea's output.
              # However, register read ports are limited on pre-SnB Intel
    ja  .L5
    movl    $1, %eax
    salq    %cl, %rax   # with -march=haswell, it will use BMI's shlx to avoid moving the shift count into ecx
    testl   $2150662721, %eax
    jne .L10
.L5:
    rep ret
.L10:
    jmp fire_special_event()

Note how it subtracts 1 from errNumber (with lea to combine that operation with a move). That lets it fit the bitmap into a 32bit immediate, avoiding the 64bit-immediate movabsq which takes more instruction bytes.

A shorter (in machine code) sequence would be:

    cmpl    $32, %edi
    ja  .L5
    mov     $2150662721, %eax
    dec     %edi   # movabsq and btq is fewer instructions / fewer Intel uops, but this saves several bytes
    bt     %edi, %eax
    jc  fire_special_event
.L5:
    ret

(The failure to use jc fire_special_event is omnipresent: compilers don't emit conditional tailcalls, so this is a missed optimization rather than a correctness issue.)

rep ret is used in branch targets, and following conditional branches, for the benefit of old AMD K8 and K10 (pre-Bulldozer): What does `rep ret` mean?. Without it, branch prediction doesn't work as well on those obsolete CPUs.

bt (bit test) with a register arg is fast. It combines the work of left-shifting a 1 by errNumber bits and doing a test, but is still 1 cycle latency and only a single Intel uop. It's slow with a memory arg because of its way-too-CISC semantics: with a memory operand for the "bit string", the address of the byte to be tested is computed based on the other arg (divided by 8), and isn't limited to the 1, 2, 4, or 8byte chunk pointed to by the memory operand.

From Agner Fog's instruction tables, a variable-count shift instruction is slower than a bt on recent Intel (2 uops instead of 1, and shift doesn't do everything else that's needed).


Martin Beckett

Use switch; it is what it's for and what programmers expect.

I would put the redundant case labels in, though, just to make people feel comfortable. I was trying to remember when/what the rules are for leaving them out. You don't want the next programmer working on it to have to do any unnecessary thinking about language details (it might be you in a few months' time!)


Kai Petzke

Sorry to disagree with the currently accepted answer. This is the year 2021. Modern compilers and their optimizers shouldn't differentiate between a switch and an equivalent if-chain anymore. If they still do, and create poorly optimized code for either variant, then write to the compiler vendor (or make it public here, which has a higher chance of being noticed), but don't let micro-optimizations influence your coding style.

So, if you use:

switch (numError) { case ERROR_A: case ERROR_B: ... }

or:

if(numError == ERROR_A || numError == ERROR_B || ...) { ... }

or:

template<typename C, typename EL>
bool has(const C& cont, const EL& el) {
    return std::find(cont.begin(), cont.end(), el) != cont.end();
}

constexpr std::array errList = { ERROR_A, ERROR_B, ... };
if(has(errList, rnd)) { ... }

shouldn't make a difference with respect to execution speed. But depending on what project you are working on, they might make a big difference in coding clarity and code maintainability. For example, if you have to check for a certain error list in many places of the code, the templated has() might be much easier to maintain, as the errList needs to be updated only in one place.

Talking about current compilers, I have compiled the test code quoted below with both clang++ -O3 -std=c++1z (versions 10 and 11) and g++ -O3 -std=c++1z. Both clang versions gave similar compiled code and execution times, so I am talking only about version 11 from now on. Most notably, functionA() (which uses if) and functionB() (which uses switch) produce exactly the same assembler output with clang! And functionC() uses a jump table, even though many other posters deemed jump tables to be an exclusive feature of switch. However, despite many people considering jump tables to be optimal, that was actually the slowest solution on clang: functionC() needs around 20 percent more execution time than functionA() or functionB().

The hand-optimized version functionH() was by far the fastest on clang. It even unrolled the loop partially, doing two iterations on each loop.

Actually, clang calculated the bitfield, which is explicitly supplied in functionH(), for functionA() and functionB() as well. However, it used conditional branches in functionA() and functionB(), which made these slow because branch prediction fails regularly, while it used the much more efficient adc ("add with carry") in functionH(). Why it failed to apply this obvious optimization in the other variants is unknown to me.

The code produced by g++ looks much more complicated than clang's, but actually runs a bit faster for functionA() and quite a lot faster for functionC(). Of the non-hand-optimized functions, functionC() is the fastest on g++, faster than any of the functions on clang. By contrast, functionH() requires twice the execution time when compiled with g++ instead of clang, mostly because g++ doesn't do the loop unrolling.

Here are the detailed results:

clang:
functionA: 109877 3627
functionB: 109877 3626
functionC: 109877 4192
functionH: 109877 524

g++:
functionA: 109877 3337
functionB: 109877 4668
functionC: 109877 2890
functionH: 109877 982

The performance changes drastically if the constant 32 is changed to 63 throughout the code:

clang:
functionA: 106943 1435
functionB: 106943 1436
functionC: 106943 4191
functionH: 106943 524

g++:
functionA: 106943 1265
functionB: 106943 4481
functionC: 106943 2804
functionH: 106943 1038

The reason for the speedup is that when the highest tested value is 63, the compilers remove some unnecessary bounds checks, because the value of rnd is bounded to 63 anyway. Note that with that bounds check removed, the non-optimized functionA() using a simple if() on g++ performs almost as fast as the hand-optimized functionH(), and it also produces rather similar assembler output.

What is the conclusion? If you hand-optimize and test compilers a lot, you will get the fastest solution. Any assumption about whether switch or if is better is void; they are the same on clang. And the easy-to-code solution of checking against an array of values is actually the fastest case on g++ (leaving out the hand-optimized version and the coincidence of which values end the list).

Future compiler versions will optimize your code better and better and get closer to your hand optimization. So don't waste your time on it, unless cycles are REALLY crucial in your case.

Here is the test code:

#include <iostream>
#include <chrono>
#include <limits>
#include <array>
#include <algorithm>

unsigned long long functionA() {
    unsigned long long cnt = 0;

    for(unsigned long long i = 0; i < 1000000; i++) {
        unsigned char rnd = (((i * (i >> 3)) >> 8) ^ i) & 63;
        if(rnd == 1 || rnd == 7 || rnd == 10 || rnd == 16 ||
           rnd == 21 || rnd == 22 || rnd == 63)
        {
            cnt += 1;
        }
    }

    return cnt;
}

unsigned long long functionB() {
    unsigned long long cnt = 0;

    for(unsigned long long i = 0; i < 1000000; i++) {
        unsigned char rnd = (((i * (i >> 3)) >> 8) ^ i) & 63;
        switch(rnd) {
        case 1:
        case 7:
        case 10:
        case 16:
        case 21:
        case 22:
        case 63:
            cnt++;
            break;
        }
    }

    return cnt;
}

template<typename C, typename EL>
bool has(const C& cont, const EL& el) {
    return std::find(cont.begin(), cont.end(), el) != cont.end();
}

unsigned long long functionC() {
    unsigned long long cnt = 0;
    constexpr std::array errList { 1, 7, 10, 16, 21, 22, 63 };

    for(unsigned long long i = 0; i < 1000000; i++) {
        unsigned char rnd = (((i * (i >> 3)) >> 8) ^ i) & 63;
        cnt += has(errList, rnd);
    }

    return cnt;
}

// Hand optimized version (manually created bitfield):
unsigned long long functionH() {
    unsigned long long cnt = 0;

    const unsigned long long bitfield =
        (1ULL << 1) +
        (1ULL << 7) +
        (1ULL << 10) +
        (1ULL << 16) +
        (1ULL << 21) +
        (1ULL << 22) +
        (1ULL << 63);

    for(unsigned long long i = 0; i < 1000000; i++) {
        unsigned char rnd = (((i * (i >> 3)) >> 8) ^ i) & 63;
        if(bitfield & (1ULL << rnd)) {
            cnt += 1;
        }
    }

    return cnt;
}

void timeit(unsigned long long (*function)(), const char* message)
{
    unsigned long long mintime = std::numeric_limits<unsigned long long>::max();
    unsigned long long fres = 0;

    for(int i = 0; i < 100; i++) {
        auto t1 = std::chrono::high_resolution_clock::now();
        fres = function();
        auto t2 = std::chrono::high_resolution_clock::now();

        auto duration = std::chrono::duration_cast<std::chrono::microseconds>(t2 - t1).count();
        if(duration < mintime) {
            mintime = duration;
        }
    }

    std::cout << message << fres << " " << mintime << std::endl;
}


int main(int argc, char* argv[]) {
    timeit(functionA, "functionA: ");
    timeit(functionB, "functionB: ");
    timeit(functionC, "functionC: ");
    timeit(functionH, "functionH: ");
    timeit(functionA, "functionA: ");
    timeit(functionB, "functionB: ");
    timeit(functionC, "functionC: ");
    timeit(functionH, "functionH: ");
    timeit(functionA, "functionA: ");
    timeit(functionB, "functionB: ");
    timeit(functionC, "functionC: ");
    timeit(functionH, "functionH: ");

    return 0;
}

Nescio

IMO this is a perfect example of what switch fall-through was made for.


In C#, this is the only case where fall-through happens. Good argument right there.
SquareCog

They work equally well. Performance is about the same given a modern compiler.

I prefer if statements over case statements because they are more readable, and more flexible -- you can add other conditions not based on numeric equality, like " || max < min ". But for the simple case you posted here, it doesn't really matter, just do what's most readable to you.


TSomKes

If your cases are likely to remain grouped in the future--if more than one case corresponds to one result--the switch may prove to be easier to read and maintain.


Jay Bazuzi

switch is definitely preferred. It's easier to look at a switch's list of cases & know for sure what it is doing than to read the long if condition.

The duplication in the if condition is hard on the eyes. Suppose one of the == was written !=; would you notice? Or if one instance of 'numError' was written 'nmuError', which just happened to compile?

I'd generally prefer to use polymorphism instead of the switch, but without more details of the context, it's hard to say.

As for performance, your best bet is to use a profiler to measure the performance of your application in conditions that are similar to what you expect in the wild. Otherwise, you're probably optimizing in the wrong place and in the wrong way.


Francesca

I agree about the compactness of the switch solution, but IMO you're hijacking the switch here. The purpose of the switch is to have different handling depending on the value. If you had to explain your algorithm in pseudo-code, you'd use an if, because semantically that's what it is: if whatever_error, do this... So unless you intend someday to change your code to have specific code for each error, I would use if.


I disagree, for the same reason that I disagree with the fall-through objection. I read the switch as "In cases 01, 07, 0A, 10, 15, 16 and 20, fire the special event." There's no fall-through to another section; this is just an artifact of the C++ syntax where you repeat the 'case' keyword for each value.
da5id

I'm not sure about best-practise, but I'd use switch - and then trap intentional fall-through via 'default'


mbac32768

Aesthetically I tend to favor this approach.

unsigned int special_events[] = {
    ERROR_01,
    ERROR_07,
    ERROR_0A,
    ERROR_10,
    ERROR_15,
    ERROR_16,
    ERROR_20
 };
 int special_events_length = sizeof (special_events) / sizeof (unsigned int);

 void process_event(unsigned int numError) {
     for (int i = 0; i < special_events_length; i++) {
         if (numError == special_events[i]) {
             fire_special_event();
             break;
          }
     }
  }

Make the data a little smarter so we can make the logic a little dumber.

I realize it looks weird. Here's the inspiration (from how I'd do it in Python):

special_events = [
    ERROR_01,
    ERROR_07,
    ERROR_0A,
    ERROR_10,
    ERROR_15,
    ERROR_16,
    ERROR_20,
    ]
def process_event(numError):
    if numError in special_events:
         fire_special_event()

A language's syntax does have an effect on how we implement a solution... => It looks ugly in C and nice in Python. :)
Use bitmaps? If ERROR_0A is 0x0a etc., you could put them as bits in a long long: long long special_events = 1LL<<1 | 1LL<<7 | 1LL<<0xa | ...; Then use if (special_events & (1LL << numError)) fire_special_event();
Yuck. You've turned an O(1) worst-case operation (if jump tables are generated) into O(N) worst-case (where N is the number of cases handled), and you used a break outside a case (yes, a minor sin, but a sin nonetheless). :)
Yuck? He said performance and space are not critical. I was simply proposing another way of looking at the problem. If we can represent a problem in a way where humans get to think less, then I usually don't care if it means computers have to think more.
Jérôme Verstrynge
while (true) != while (loop)

Probably the first one is optimised away by the compiler; that would explain why the second loop is slower as the loop count increases.


This appears to be a comment to McAnix's answer. That's only one of the problems with that attempt at timing if vs. switch as a loop-end condition in Java.
William Keller

I would pick the if statement for the sake of clarity and convention, although I'm sure that some would disagree. After all, you want to do something if some condition is true! Having a switch with one action seems a little... unnecessary.


Ed Brown

I'm not the person to tell you about speed and memory usage, but a switch statement is a hell of a lot easier to understand than a large if statement (especially 2-3 months down the line).


lewis

I would say use switch. This way you only have to implement the differing outcomes. Your ten identical cases can use the default. Should one change, all you need to do is explicitly implement the change; there is no need to edit the default. It's also far easier to add or remove cases from a switch than to edit an if/else-if chain.

switch (numError) {
    case ERROR_20 : { fire_special_event(); } break;
    default : { /* no action */ } break;
}

Maybe even test your condition (in this case numError) against a list of possibilities, an array perhaps, so your switch isn't even used unless there definitely will be an outcome.


There are about 30 errors total. 10 require the special action, so I am using the default for the ~20 errors that do not require an action...
Greg Whitfield

Seeing as you only have 30 error codes, code up your own jump table; then you make all the optimisation choices yourself (a jump will always be quickest), rather than hoping the compiler will do the right thing. It also makes the code very small (apart from the static declaration of the jump table). It has the side benefit that with a debugger you can modify the behaviour at runtime should you need to, just by poking the table data directly.


Wow, that seems like a way to turn a simple problem into a complex one. Why go to all that trouble when the compiler will do a great job for you. Plus it's apparently an error handler, so it's not likely to be so speed critical. A switch is by far the easiest thing to read and maintain.
A table is hardly complex - in fact it's probably simpler than a switch to code. And the statement did mention performance was a factor.
That sounds like premature optimization. As long as you keep your enum values small and contiguous, the compiler should do it for you. Putting the switch in a separate function, like Mark Ransom suggests in his answer, keeps the code that uses it nice and small and gives the same small-code benefit.
Also, if you're going to implement anything yourself, make a std::bitset<MAXERR> specialerror;, then if (specialerror[err]) { special_handler(); }. This will be faster than a jump table, esp. in the not-taken case.
McAnix

I know it's old, but:

public class SwitchTest {
  static final int max = 100000;

  public static void main(String[] args) {
    int counter1 = 0;
    long start1 = 0L;
    long total1 = 0L;

    int counter2 = 0;
    long start2 = 0L;
    long total2 = 0L;
    boolean loop = true;

    start1 = System.currentTimeMillis();
    while (true) {
      if (counter1 == max) {
        break;
      } else {
        counter1++;
      }
    }
    total1 = System.currentTimeMillis() - start1;

    start2 = System.currentTimeMillis();
    while (loop) {
      switch (counter2) {
        case max:
          loop = false;
          break;
        default:
          counter2++;
      }
    }
    total2 = System.currentTimeMillis() - start2;

    System.out.println("While if/else: " + total1 + "ms");
    System.out.println("Switch: " + total2 + "ms");
    System.out.println("Max Loops: " + max);

    System.exit(0);
  }
}

Varying the loop count changes a lot:

While if/else: 5ms Switch: 1ms Max Loops: 100000

While if/else: 5ms Switch: 3ms Max Loops: 1000000

While if/else: 5ms Switch: 14ms Max Loops: 10000000

While if/else: 5ms Switch: 149ms Max Loops: 100000000

(add more statements if you want)


Good point, but sry, dude, you're in the wrong language. Varying the language changes a lot ;)
The if(max) break loop runs in constant time regardless of loop count? Sounds like the JIT-compiler is smart enough to optimize the loop away to counter2=max. And maybe it's slower than switch if the first call to currentTimeMillis has more overhead, because not everything is JIT-compiled yet? Putting the loops in the other order would probably give different results.
Jordan Effinger

When it comes to compiling the program, I don't know if there is any difference. But as for the program itself and keeping the code as simple as possible, I personally think it depends on what you want to do. if/else if/else statements have their advantages, which I think are:

- they allow you to test a variable against specific ranges
- you can use functions (Standard Library or personal) as conditionals

For example:

int a;
cout << "enter value:\n";
cin >> a;

if (a > 0 && a < 5)
{
    cout << "a is between 0, 5\n";
}
else if (a > 5 && a < 10)
{
    cout << "a is between 5, 10\n";
}
else
{
    cout << "a is not in the range 0, 10\n";
}

However, if/else if/else statements can get complicated and messy in a hurry (despite your best attempts). Switch statements tend to be clearer, cleaner, and easier to read, but can only be used to test against specific values. For example:

int a;
cout << "enter value:\n";
cin >> a;

switch (a)
{
    case 0:
    case 1:
    case 2:
    case 3:
    case 4:
    case 5:
        cout << "a is between 0, 5 and equals: " << a << "\n";
        break;
    // other case statements
    default:
        cout << "a is not in the range or is not a good value\n";
        break;
}

I prefer if/else if/else statements, but it really is up to you. If you want to use functions as the conditions, or you want to test something against a range, array, or vector, and/or you don't mind dealing with the complicated nesting, I would recommend using if/else if/else blocks. If you want to test against single values, or you want a clean and easy-to-read block, I would recommend you use switch/case blocks.