
Easily measure elapsed time

I am trying to use time() to measure various points of my program.

What I don't understand is why the values in the before and after are the same? I understand this is not the best way to profile my program; I just want to see how long something takes.

printf("**MyProgram::before time= %ld\n", time(NULL));

doSomthing();
doSomthingLong();

printf("**MyProgram::after time= %ld\n", time(NULL));

I have tried:

struct timeval diff, startTV, endTV;

gettimeofday(&startTV, NULL); 

doSomething();
doSomethingLong();

gettimeofday(&endTV, NULL); 

timersub(&endTV, &startTV, &diff);

printf("**time taken = %ld %ld\n", diff.tv_sec, diff.tv_usec);

How do I read a result of **time taken = 0 26339? Does that mean 26,339 nanoseconds = 26.3 msec?

What about **time taken = 4 45025, does that mean 4 seconds and 25 msec?

I don't understand the question. Of course the values are different. Time passed in between, so time() returns a different value.
What do you mean "I don't understand is why the values in the before and after are Different"? You're getting the current time (in seconds since Jan 1, 1970) using time(NULL) ... the second time you call it will be N seconds after the first and thus ... different (unless whatever it is you're doing doesn't take a second to complete ... in which case, it'll be the same as the first).
Can you tell us what it prints, and how long it takes if you time it with a stopwatch or a wall clock (or a calendar)?
Sorry, I mean both values are the SAME. I mistyped my question.

majkel.mk
//***C++11 Style:***
#include <chrono>

std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now();
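// ... the code you want to measure goes between these two time points ...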
std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now();

std::cout << "Time difference = " << std::chrono::duration_cast<std::chrono::microseconds>(end - begin).count() << "[µs]" << std::endl;
std::cout << "Time difference = " << std::chrono::duration_cast<std::chrono::nanoseconds> (end - begin).count() << "[ns]" << std::endl;

To run this you have to add the #include <chrono> directive and I would change the reporting time as: std::cout << "Time difference (sec) = " << (std::chrono::duration_cast<std::chrono::microseconds>(end - begin).count()) /1000000.0 <<std::endl; (and do not forget the C++11 flag when compiling: -std=c++11 )
@RestlessC0bra According to the docs on cppreference, "This clock is not related to wall clock time (for example, it can be time since last reboot), and is most suitable for measuring intervals."
@Antonello - You should look more closely at std::chrono::duration_cast, and in particular at std::chrono::seconds if you want the duration to be returned in terms of seconds.
@AxelRietschin That would only return the time in seconds. Antonello's suggestion also provides fractional time (i.e. 2.15315 seconds as opposed to just 2 seconds).
What data type is this? std::chrono::duration_cast(end - begin).count()
Nikos Athanasiou

0 - Delta

Use a delta function to compute time differences:

auto start = std::chrono::steady_clock::now();
std::cout << "Elapsed(ms)=" << since(start).count() << std::endl;

since accepts any timepoint and produces any duration (milliseconds is the default). It is defined as:

template <
    class result_t   = std::chrono::milliseconds,
    class clock_t    = std::chrono::steady_clock,
    class duration_t = std::chrono::milliseconds
>
auto since(std::chrono::time_point<clock_t, duration_t> const& start)
{
    return std::chrono::duration_cast<result_t>(clock_t::now() - start);
}

Demo

1 - Timer

Use a timer based on std::chrono:

Timer clock; // Timer<milliseconds, steady_clock>

clock.tick();
/* code you want to measure */
clock.tock();

cout << "Run time = " << clock.duration().count() << " ms\n";

Demo

Timer is defined as:

template <class DT = std::chrono::milliseconds,
          class ClockT = std::chrono::steady_clock>
class Timer
{
    using timep_t = typename ClockT::time_point;
    timep_t _start = ClockT::now(), _end = {};

public:
    void tick() { 
        _end = timep_t{}; 
        _start = ClockT::now(); 
    }
    
    void tock() { _end = ClockT::now(); }
    
    template <class T = DT> 
    auto duration() const { 
        gsl_Expects(_end != timep_t{} && "toc before reporting"); 
        return std::chrono::duration_cast<T>(_end - _start); 
    }
};

As Howard Hinnant pointed out, we use a duration to remain in the chrono type-system and perform operations like averaging or comparisons (e.g. here this means using std::chrono::milliseconds). When we just do IO, we use the count() or ticks of a duration (e.g. here number of milliseconds).
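For example (a sketch assuming the Timer above; the 5 ms threshold is arbitrary), durations can be added, averaged and compared before count() is ever called:

Timer<> t1;
t1.tick(); /* first run of the code under test */ t1.tock();

Timer<> t2;
t2.tick(); /* second run of the code under test */ t2.tock();

auto avg = (t1.duration() + t2.duration()) / 2;   // still a std::chrono::milliseconds value
if (avg < std::chrono::milliseconds(5))
    std::cout << "average run time " << avg.count() << " ms\n";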

2 - Instrumentation

Any callable (function, function object, lambda etc.) can be instrumented for benchmarking. Say you have a function F invokable with arguments arg1, arg2; this technique results in:

cout << "F runtime=" << measure<>::duration(F, arg1, arg2).count() << "ms";

Demo

measure is defined as:

template <class TimeT  = std::chrono::milliseconds,
          class ClockT = std::chrono::steady_clock>
struct measure
{
    template<class F, class ...Args>
    static auto duration(F&& func, Args&&... args)
    {
        auto start = ClockT::now();
        std::invoke(std::forward<F>(func), std::forward<Args>(args)...);
        return std::chrono::duration_cast<TimeT>(ClockT::now()-start);
    }
};

As mentioned in (1), using the duration w/o .count() is most useful for clients that want to post-process a bunch of durations prior to I/O, e.g. average:

auto avg = (measure<>::duration(func) + measure<>::duration(func)) / 2;
std::cout << "Average run time " << avg.count() << " ms\n";

+This is why the function call is forwarded.

+The complete code can be found here

+My attempt to build a benchmarking framework based on chrono is recorded here

+Old demo


Nice; I have something similar in my code, but use a different interface to the class: I have a class (code_timer) that takes the start time (std::chrono::system_clock::now();) in the constructor, a method code_timer::ellapsed that measures the difference between a new now() call and the one in the constructor, and a code_timer::reset method that resets the start time to a new now() result. To measure the execution of a functor in my code, I use a free function, outside the class. This allows for measuring time from the construction of an object, to the finish of an asynch call.
<nitpick>: Don't escape out of the chrono system until you have to (avoid use of .count()). Let the client call .count() when forced to (say for I/O, which is indeed unfortunate). The client may want to post-process a bunch of durations prior to I/O (e.g. average) and that is best done within the chrono system.
@user3241228 1. VS2013 does not support auto return types (just trailing return types - it's a c++14 feature not available yet). 2. I believe this is the reason but I asked a q just to be sure
Why not std::forward<F>(func)?
@oliora It's the same thing. I prefer std::forward<decltype(func)>(func) because it can apply to arguments of generic lambdas (auto&& func) where F is not syntactically there and it's easy to abstract in a utility macro #define fw(arg) std::forward<decltype(arg)>(arg) which I do in my benchmark library (so it's a syntactic left over on which I don't elaborate much in the answer)
Anonymous
#include <ctime>

void f() {
  using namespace std;
  clock_t begin = clock();

  code_to_time();

  clock_t end = clock();
  double elapsed_secs = double(end - begin) / CLOCKS_PER_SEC;
}

The time() function is only accurate to within a second, but there are CLOCKS_PER_SEC "clocks" within a second. This is an easy, portable measurement, even though it's over-simplified.


Be aware that clock() measures CPU time, not actual time elapsed (which may be much greater).
When programming parallel code for clusters, this method doesn't reflect the real-world time...
This seems the easiest of the ways. Would you care to update or address the comment made by @jlstrecker?
The solution posted above is not a good solution for many reasons. This is the correct answer - stackoverflow.com/questions/2962785/…
I tried this solution, and as the comments suggested, my timer ran much faster than the real world time.
bakalolo

As I can see from your question, it looks like you want to know the elapsed time after the execution of some piece of code. I guess you would be comfortable seeing the results in seconds. If so, try using the difftime() function as shown below. Hope this solves your problem.

#include <time.h>
#include <stdio.h>

time_t start,end;
time (&start);
.
.
.
<your code>
.
.
.
time (&end);
double dif = difftime (end,start);
printf ("Elasped time is %.2lf seconds.", dif );

This always gives me integer seconds. Is that supposed to happen?
time will always only return seconds, so it can't be used for sub-second measurements.
RvdK

Windows only: (The Linux tag was added after I posted this answer)

You can use GetTickCount() to get the number of milliseconds that have elapsed since the system was started.

long int before = GetTickCount();

// Perform time-consuming operation

long int after = GetTickCount();

I am using it on linux. So I can't use the GetTickCount() function.
Ah, never mind ;) Thanks for updating the tag of your post
It works and gives the real time, not the CPU time. I tested it by placing SleepEx(5000,0) in place of //Perform time-consuming operation and the difference between after and before was almost 5 sec.
In SDL there is a portable function doing the same thing: SDL_GetTicks(). It measures time since SDL initialization and not system startup.
Anonymous
#include <chrono>
#include <iostream>
#include <string>

struct profiler
{
    std::string name;
    std::chrono::high_resolution_clock::time_point p;
    profiler(std::string const &n) :
        name(n), p(std::chrono::high_resolution_clock::now()) { }
    ~profiler()
    {
        using dura = std::chrono::duration<double>;
        auto d = std::chrono::high_resolution_clock::now() - p;
        std::cout << name << ": "
            << std::chrono::duration_cast<dura>(d).count()
            << std::endl;
    }
};

#define PROFILE_BLOCK(pbn) profiler _pfinstance(pbn)

Usage is below:

{
    PROFILE_BLOCK("Some time");
    // your code or function
}

This is similar to RAII, scoped to the block

Note: this is not mine, but I thought it was relevant here


includes missing
philant

time(NULL) returns the number of seconds elapsed since 01/01/1970 at 00:00 (the Epoch). So the difference between the two values is the number of seconds your processing took.

int t0 = time(NULL);
doSomthing();
doSomthingLong();
int t1 = time(NULL);

printf ("time = %d secs\n", t1 - t0);

You can get finer results with gettimeofday(), which returns the current time in seconds (as time() does) as well as in microseconds.
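For example (a rough sketch based on the code in the question; requires <sys/time.h>), the two gettimeofday() samples can be folded into a single microsecond count:

struct timeval t0, t1;

gettimeofday(&t0, NULL);
doSomthing();
doSomthingLong();
gettimeofday(&t1, NULL);

long long elapsed_us = (t1.tv_sec - t0.tv_sec) * 1000000LL
                     + (t1.tv_usec - t0.tv_usec);
printf ("time = %lld usecs\n", elapsed_us);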


Seems the smartest way and it is very similar to Java. Btw, shouldn't this be stored in a long type?
vodkhang

The time(NULL) function will return the number of seconds elapsed since 01/01/1970 at 00:00. And because that function is called at different times in your program, it will always return a different value.


I don't know why someone downvoted, but your answer isn't entirely correct. For starters it doesn't return the date time, and it won't always be different.
Eskay
#include <time.h>   // for clock
#include <math.h>   // for fmod
#include <cstdlib>  // for system
#include <iostream> // for cout
// note: delay() used below is a non-standard function (e.g. from <dos.h> on some compilers)

using namespace std;

int main()
{


   clock_t t1,t2;

   t1=clock(); // first time capture

   // Now your time spanning loop or code goes here
   // i am first trying to display time elapsed every time loop runs

   int ddays=0; // d prefix is just to say that this variable will be used for display
   int dhh=0;
   int dmm=0;
   int dss=0;

   int loopcount = 1000 ; // just for demo your loop will be different of course

   for(float count=1;count<loopcount;count++)
   {

     t2=clock(); // we get the time now

     float difference= (((float)t2)-((float)t1)); // clock ticks elapsed since t1

    // now get the time elapsed in seconds

    float seconds = difference/CLOCKS_PER_SEC; // float value of seconds
    if (seconds<(60*60*24)) // a day is not over
    {
        dss = fmod(seconds,60); // the remainder is seconds to be displayed
        float minutes= seconds/60;  // the total minutes in float
        dmm= fmod(minutes,60);  // the remainder are minutes to be displayed
        float hours= minutes/60; // the total hours in float
        dhh= hours;  // the hours to be displayed
        ddays=0;
    }
    else // we have reached the counting of days
    {
        float days = seconds/(24*60*60);
        ddays = (int)(days);
        float minutes= seconds/60;  // the total minutes in float
        dmm= fmod(minutes,60);  // the remainder is the minutes to be displayed
        float hours= minutes/60; // the total hours in float
        dhh= fmod (hours,24);  // the hours to be displayed

    }

    cout<<"Count Is : "<<count<<"Time Elapsed : "<<ddays<<" Days "<<dhh<<" hrs "<<dmm<<" mins "<<dss<<" secs";


    // the actual working code goes here; I have just put a delay function
    delay(1000);   // non-standard delay; substitute your real work
    system("cls"); // Windows-only console clear

 } // end for loop

}// end of main 

Whilst your answer is appreciated, we do prefer a preamble containing a brief description of the code. Thanks.
This is not the elapsed time, but the processor time.
Bryan

The values printed by your second program are seconds, and microseconds.

0 26339 = 0.026'339 s =   26339 µs
4 45025 = 4.045'025 s = 4045025 µs
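A small sketch of that conversion (my addition, reusing the diff produced by timersub() in the question):

double    elapsed_sec = diff.tv_sec + diff.tv_usec / 1e6;        // e.g. 0.026339 s
long long elapsed_us  = diff.tv_sec * 1000000LL + diff.tv_usec;  // e.g. 26339 µs
printf("%.6f s = %lld us\n", elapsed_sec, elapsed_us);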

Akanksha Gupta
#include <ctime>
#include <cstdio>
#include <iostream>
#include <chrono>
#include <sys/time.h>
using namespace std;
using namespace std::chrono;

void f1()
{
  high_resolution_clock::time_point t1 = high_resolution_clock::now();
  high_resolution_clock::time_point t2 = high_resolution_clock::now();
  double dif = duration_cast<nanoseconds>( t2 - t1 ).count();
  printf ("Elasped time is %lf nanoseconds.\n", dif );
}

void f2()
{
  timespec ts1,ts2;
  clock_gettime(CLOCK_REALTIME, &ts1);
  clock_gettime(CLOCK_REALTIME, &ts2);
  double dif = double( ts2.tv_sec - ts1.tv_sec ) * 1e9 + double( ts2.tv_nsec - ts1.tv_nsec );
  printf ("Elapsed time is %lf nanoseconds.\n", dif );
}

void f3()
{
  struct timeval t1,t0;
  gettimeofday(&t0, 0);
  gettimeofday(&t1, 0);
  double dif = double( t1.tv_sec - t0.tv_sec ) * 1e9 + double( (t1.tv_usec - t0.tv_usec) * 1000 );
  printf ("Elapsed time is %lf nanoseconds.\n", dif );
}
void f4()
{
  high_resolution_clock::time_point t1 , t2;
  double diff = 0;
  t1 = high_resolution_clock::now() ;
  for(int i = 1; i <= 10 ; i++)
  {
    t2 = high_resolution_clock::now() ;
    diff+= duration_cast<nanoseconds>( t2 - t1 ).count();
    t1 = t2;
  }
  printf ("high_resolution_clock:: Elasped time is %lf nanoseconds.\n", diff/10 );
}

void f5()
{
  timespec ts1,ts2;
  double diff = 0;
  clock_gettime(CLOCK_REALTIME, &ts1);
  for(int i = 1; i <= 10 ; i++)
  {
    clock_gettime(CLOCK_REALTIME, &ts2);
    diff+= double( ts2.tv_sec - ts1.tv_sec ) * 1e9 + double( ts2.tv_nsec - ts1.tv_nsec );
    ts1 = ts2;
  }
  printf ("clock_gettime:: Elapsed time is %lf nanoseconds.\n", diff/10 );
}

void f6()
{
  struct timeval t1,t2;
  double diff = 0;
  gettimeofday(&t1, 0);
  for(int i = 1; i <= 10 ; i++)
  {
    gettimeofday(&t2, 0);
    diff+= double( t2.tv_sec - t1.tv_sec ) * 1e9 + double( (t2.tv_usec - t1.tv_usec) * 1000 );
    t1 = t2;
  }
  printf ("gettimeofday:: Elapsed time is %lf nanoseconds.\n", diff/10 );
}

int main()
{
  //  f1();
  //  f2();
  //  f3();
  f6();
  f4();
  f5();
  return 0;
}

wilhelmtell

The time(NULL) function call will return the number of seconds elapsed since the epoch: January 1, 1970. Perhaps what you mean to do is take the difference between two timestamps:

time_t start = time(NULL);
doSomthing();
doSomthingLong();

printf ("**MyProgram::time elapsed= %lds\n", (long)(time(NULL) - start));

cloudrain21

On Linux, clock_gettime() is one of the good choices. You must link against the real-time library (-lrt).

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <time.h>

#define BILLION  1000000000.0

int main( int argc, char **argv )
  {
    struct timespec start, stop;
    double accum;

    if( clock_gettime( CLOCK_REALTIME, &start) == -1 ) {
      perror( "clock gettime" );
      exit( EXIT_FAILURE );
    }

    system( argv[1] );

    if( clock_gettime( CLOCK_REALTIME, &stop) == -1 ) {
      perror( "clock gettime" );
      exit( EXIT_FAILURE );
    }

    accum = ( stop.tv_sec - start.tv_sec )
          + ( stop.tv_nsec - start.tv_nsec )
            / BILLION;
    printf( "%lf\n", accum );
    return( EXIT_SUCCESS );
  }

rwp

As others have already noted, the time() function in the C standard library does not have a resolution better than one second. The only fully portable C function that may provide better resolution appears to be clock(), but that measures processor time rather than wallclock time. If one is content to limit oneself to POSIX platforms (e.g. Linux), then the clock_gettime() function is a good choice.

Since C++11, there are much better timing facilities available that offer better resolution in a form that should be very portable across different compilers and operating systems. Similarly, the boost::datetime library provides good high-resolution timing classes that should be highly portable.

One challenge in using any of these facilities is the time-delay introduced by querying the system clock. From experimenting with clock_gettime(), boost::datetime and std::chrono, this delay can easily be a matter of microseconds. So, when measuring the duration of any part of your code, you need to allow for there being a measurement error of around this size, or try to correct for that zero-error in some way. Ideally, you may well want to gather multiple measurements of the time taken by your function, and compute the average, or maximum/minimum time taken across many runs.
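As a rough illustration of estimating that zero-error (a sketch of the idea only, not the cxx-rtimers API), you can time a large number of back-to-back clock queries and average them:

#include <chrono>
#include <iostream>

int main() {
    constexpr int N = 100000;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < N; ++i) {
        auto t = std::chrono::steady_clock::now();  // the call whose overhead we want to estimate
        (void)t;
    }
    auto t1 = std::chrono::steady_clock::now();

    auto total_ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
    std::cout << "approx. cost per now() call: " << total_ns / double(N) << " ns\n";
}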

To help with all these portability and statistics-gathering issues, I've been developing the cxx-rtimers library available on Github which tries to provide a simple API for timing blocks of C++ code, computing zero errors, and reporting stats from multiple timers embedded in your code. If you have a C++11 compiler, you simply #include <rtimers/cxx11.hpp>, and use something like:

void expensiveFunction() {
    static rtimers::cxx11::DefaultTimer timer("expensiveFunc");
    auto scopedStartStop = timer.scopedStart();
    // Do something costly...
}

On program exit, you'll get a summary of timing stats written to std::cerr such as:

Timer(expensiveFunc): <t> = 6.65289us, std = 3.91685us, 3.842us <= t <= 63.257us (n=731)

which shows the mean time, its standard-deviation, the upper and lower limits, and the number of times this function was called.

If you want to use Linux-specific timing functions, you can #include <rtimers/posix.hpp>, or if you have the Boost libraries but an older C++ compiler, you can #include <rtimers/boost.hpp>. There are also versions of these timer classes that can gather statistical timing information from across multiple threads. There are also methods that allow you to estimate the zero-error associated with two immediately consecutive queries of the system clock.


Gluttton

C++ std::chrono has a clear benefit of being cross-platform. However, it also introduces a significant overhead compared to POSIX clock_gettime(). On my Linux box all std::chrono::xxx_clock::now() flavors perform roughly the same:

std::chrono::system_clock::now()
std::chrono::steady_clock::now()
std::chrono::high_resolution_clock::now()

Though POSIX clock_gettime(CLOCK_MONOTONIC, &time) should be the same as steady_clock::now(), it is more than 3x faster!

Here is my test, for completeness.

#include <stdio.h>
#include <chrono>
#include <ctime>

void print_timediff(const char* prefix, const struct timespec& start, const struct timespec& end)
{
    double milliseconds = end.tv_nsec >= start.tv_nsec
                        ? (end.tv_nsec - start.tv_nsec) / 1e6 + (end.tv_sec - start.tv_sec) * 1e3
                        : (start.tv_nsec - end.tv_nsec) / 1e6 + (end.tv_sec - start.tv_sec - 1) * 1e3;
    printf("%s: %lf milliseconds\n", prefix, milliseconds);
}

int main()
{
    int i, n = 1000000;
    struct timespec start, end;

    // Test stopwatch
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < n; ++i) {
        struct timespec dummy;
        clock_gettime(CLOCK_MONOTONIC, &dummy);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    print_timediff("clock_gettime", start, end);

    // Test chrono system_clock
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < n; ++i)
        auto dummy = std::chrono::system_clock::now();
    clock_gettime(CLOCK_MONOTONIC, &end);
    print_timediff("chrono::system_clock::now", start, end);

    // Test chrono steady_clock
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < n; ++i)
        auto dummy = std::chrono::steady_clock::now();
    clock_gettime(CLOCK_MONOTONIC, &end);
    print_timediff("chrono::steady_clock::now", start, end);

    // Test chrono high_resolution_clock
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < n; ++i)
        auto dummy = std::chrono::high_resolution_clock::now();
    clock_gettime(CLOCK_MONOTONIC, &end);
    print_timediff("chrono::high_resolution_clock::now", start, end);

    return 0;
}

And this is the output I get when compiled with gcc7.2 -O3:

clock_gettime: 24.484926 milliseconds
chrono::system_clock::now: 85.142108 milliseconds
chrono::steady_clock::now: 87.295347 milliseconds
chrono::high_resolution_clock::now: 84.437838 milliseconds

Mike Weller

Internally the function will access the system's clock, which is why it returns different values each time you call it. In general with non-functional languages there can be many side effects and hidden state in functions which you can't see just by looking at the function's name and arguments.


Safar Ligal

From what I see, tv_sec stores the elapsed seconds while tv_usec separately stores the remaining microseconds; they are not conversions of each other. Hence, they must be converted to a common unit and added to get the total time elapsed.

struct timeval startTV, endTV;

gettimeofday(&startTV, NULL); 

doSomething();
doSomethingLong();

gettimeofday(&endTV, NULL); 

printf("**time taken in microseconds = %ld\n",
    (endTV.tv_sec * 1e6 + endTV.tv_usec - (startTV.tv_sec * 1e6 + startTV.tv_usec))
    );

Kevin Kreiser

I needed to measure the execution time of individual functions within a library. I didn't want to have to wrap every call of every function with a time-measuring function, because it's ugly and deepens the call stack. I also didn't want to put timer code at the top and bottom of every function, because that makes a mess when the function can exit early or throw exceptions, for example. So what I ended up doing was making a timer that uses its own lifetime to measure time.

In this way I can measure the wall time a block of code took by just instantiating one of these objects at the beginning of the code block in question (function or any scope, really) and then allowing the instance's destructor to measure the time elapsed since construction when the instance goes out of scope. You can find the full example here but the struct is extremely simple:

#include <chrono>
#include <functional>

template <typename clock_t = std::chrono::steady_clock>
struct scoped_timer {
  using duration_t = typename clock_t::duration;
  const std::function<void(const duration_t&)> callback;
  const std::chrono::time_point<clock_t> start;

  scoped_timer(const std::function<void(const duration_t&)>& finished_callback) :
      callback(finished_callback), start(clock_t::now()) { }
  scoped_timer(std::function<void(const duration_t&)>&& finished_callback) :
      callback(std::move(finished_callback)), start(clock_t::now()) { }
  ~scoped_timer() { callback(clock_t::now() - start); }
};

The struct will call you back on the provided functor when it goes out of scope so you can do something with the timing information (print it or store it or whatever). If you need to do something even more complex you could even use std::bind with std::placeholders to callback functions with more arguments.
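For example (a sketch; record() is a hypothetical sink function, not part of the answer), std::bind can attach extra arguments such as a label while the placeholder receives the duration:

#include <chrono>
#include <functional>
#include <iostream>
#include <string>

// hypothetical sink that also receives a label
void record(const std::string& tag, const scoped_timer<>::duration_t& elapsed) {
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count();
    std::cout << tag << " took " << ms << "ms\n";
}

void load() {
    // the label is bound now; the timer supplies the duration via the placeholder
    scoped_timer<> t(std::bind(&record, "load phase", std::placeholders::_1));
    // ... work to measure ...
}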

Here's a quick example of using it:

void test(bool should_throw) {
  scoped_timer<> t([](const scoped_timer<>::duration_t& elapsed) {
    auto e = std::chrono::duration_cast<std::chrono::duration<double, std::milli>>(elapsed).count();
    std::cout << "took " << e << "ms" << std::endl;
  });

  std::this_thread::sleep_for(std::chrono::seconds(1));

  if (should_throw)
    throw nullptr;

  std::this_thread::sleep_for(std::chrono::seconds(1));
}

If you want to be more deliberate, you can also use new and delete to explicitly start and stop the timer without relying on scoping to do it for you.
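A sketch of that explicit variant (my illustration; start_async_request() is hypothetical): the timer's lifetime then spans whatever you want, for example across an asynchronous boundary:

auto* timer = new scoped_timer<>([](const scoped_timer<>::duration_t& elapsed) {
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count();
    std::cout << "request took " << ms << "ms\n";
});

start_async_request();   // hypothetical; timing continues past the current scope

// ... later, e.g. in the completion handler ...
delete timer;            // the destructor fires here and reports the elapsed time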


kibibu

They are the same because your doSomething function happens faster than the granularity of the timer. Try:

printf ("**MyProgram::before time= %ld\n", time(NULL));

for(int i = 0; i < 1000; ++i) {
    doSomthing();
    doSomthingLong();
}

printf ("**MyProgram::after time= %ld\n", time(NULL));

David Božjak

The reason both values are the same is that your long procedure doesn't take that long - less than one second. You can try just adding a long loop (for (int i = 0; i < 100000000; i++) ; ) at the end of the function to make sure this is the issue, then we can go from there...

In case the above turns out to be true, you will need to find a different system function (I understand you work on Linux, so I can't help you with the function name) to measure time more accurately. I am sure there is a function similar to GetTickCount() in Linux, you just need to find it.
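For the record (my addition, since the answer leaves the Linux function unnamed), the usual Linux counterpart is clock_gettime() with CLOCK_MONOTONIC:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec before, after;
    clock_gettime(CLOCK_MONOTONIC, &before);

    /* perform time-consuming operation */

    clock_gettime(CLOCK_MONOTONIC, &after);
    long ms = (after.tv_sec - before.tv_sec) * 1000L
            + (after.tv_nsec - before.tv_nsec) / 1000000L;
    printf("elapsed: %ld ms\n", ms);
    return 0;
}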


oliora

I usually use the following:

#include <chrono>
#include <type_traits>

using perf_clock = std::conditional<
    std::chrono::high_resolution_clock::is_steady,
    std::chrono::high_resolution_clock,
    std::chrono::steady_clock
>::type;

using floating_seconds = std::chrono::duration<double>;

template<class Func, class... Args>
floating_seconds run_test(Func&& func, Args&&... args)
{
   const auto t0 = perf_clock::now();
   std::forward<Func>(func)(std::forward<Args>(args)...);
   return floating_seconds(perf_clock::now() - t0);
} 

It's the same as @nikos-athanasiou proposed, except that I avoid using a non-steady clock and use a floating-point number of seconds as the duration.
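A usage sketch (my example; the lambda and the iteration count are arbitrary):

#include <iostream>

int main() {
    auto secs = run_test([](int n) {
        volatile long long sum = 0;
        for (int i = 0; i < n; ++i) sum += i;
    }, 10000000);
    std::cout << "took " << secs.count() << " s\n";
}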


On this typeswitch : Typically high_resolution_clock is a typedef for either system_clock or steady_clock. So to trace that std::conditional if the is_steady part is true, then you pick the high_resolution_clock which is (a typedef to) the steady_clock. If it's false then you pick the steady_clock again. Just use steady_clock from the beginning ...
@nikos-athanasiou I completely agree with the comment from 5gon12eder that the "typical" case is not required by the standard, so some STL may be implemented in a different way. I prefer my code to be more generic and not reliant on implementation details.
It's not required but explicitly stated in 20.12.7.3 : high_resolution_clock may be a synonym for system_clock or steady_clock. The reason is this : high_resolution_clock represents clocks with the shortest tick period, so whatever the implementation, it has two choices, being steady or not. Whatever choice we make, saying that the implementation will differ from the other two clocks is like saying we have a better implementation for a steady (or not) clock that we choose not to use (for steady or not clocks). Knowing how is good, knowing why is better
@nikos-athanasiou I would prefer to be 100% safe, especially when it costs me no runtime overhead and undetectable compile-time overhead. You may rely on "may" and assumptions if you want.
au contraire my friend, it's you that relies on "may", but suit yourself. If you want to be 100% sure and keep writing this then, you should also find a way, for you and users of your code, to avoid non-portably mixing time points of different clocks (if ever this type switch acquires a meaning, it'll behave differently on different platforms). Have fun!
Yas

Matlab flavored!

tic starts a stopwatch timer to measure performance. The function records the internal time at execution of the tic command. Display the elapsed time with the toc function.

#include <iostream>
#include <ctime>
#include <thread>
using namespace std;

clock_t START_TIMER;
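// note: clock() measures CPU time on POSIX systems (a sleeping thread may report ~0 s there),
// while on Windows it behaves more like wall-clock time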

clock_t tic()
{
    return START_TIMER = clock();
}

void toc(clock_t start = START_TIMER)
{
    cout
        << "Elapsed time: "
        << (clock() - start) / (double)CLOCKS_PER_SEC << "s"
        << endl;
}

int main()
{
    tic();
    this_thread::sleep_for(2s);
    toc();

    return 0;
}

This is not a good approach. Because it uses a global variable by default, it’s not reentrant. An inner function using the same interface could screw up the timing of the outer interface.
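One way around that (a sketch using the toc(start) parameter already present in the answer): keep the clock_t returned by tic(), so an inner measurement that overwrites the global doesn't disturb the outer one:

clock_t outer = tic();   // keep a private copy of the outer start time

tic();                   // inner measurement reuses the global
// ... inner work ...
toc();                   // prints the inner elapsed time

// ... more outer work ...
toc(outer);              // prints the outer elapsed time, unaffected by the inner tic()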
Weather Vane

In answer to OP's three specific questions.

"What I don't understand is why the values in the before and after are the same?"

The first question and sample code show that time() has a resolution of 1 second, so the answer has to be that the two functions execute in less than 1 second. But occasionally it will (apparently illogically) report 1 second if the two timer marks straddle a one-second boundary.

The next example uses gettimeofday() which fills this struct

struct timeval {
    time_t      tv_sec;     /* seconds */
    suseconds_t tv_usec;    /* microseconds */
};

and the second question asks: "How do I read a result of **time taken = 0 26339? Does that mean 26,339 nanoseconds = 26.3 msec?"

My second answer is the time taken is 0 seconds and 26339 microseconds, that is 0.026339 seconds, which bears out the first example executing in less than 1 second.

The third question asks: "What about **time taken = 4 45025, does that mean 4 seconds and 25 msec?"

My third answer is the time taken is 4 seconds and 45025 microseconds, that is 4.045025 seconds, which shows that OP has altered the tasks performed by the two functions which he previously timed.


SickGuy

Here's a simple class that will print the duration between the time it got in and out of scope in the specified duration unit:

#include <chrono>
#include <iostream>
#include <string>

template <typename T>
class Benchmark
{
   public:
    Benchmark(std::string name) : name(name), start(std::chrono::steady_clock::now()) {}
    ~Benchmark()
    {
        auto end = std::chrono::steady_clock::now();
        T duration = std::chrono::duration_cast<T>(end - start);
        std::cout << "Bench \"" << name << "\" took: " << duration.count() << " units" << std::endl;
    }

   private:
    std::string name;
    std::chrono::time_point<std::chrono::steady_clock> start;
};
int main()
{
  Benchmark<std::chrono::nanoseconds> bench("for loop");
  for(int i = 0; i < 1001000; i++){}
}

Example usage:

int main()
{
  Benchmark<std::chrono::nanoseconds> bench("for loop");
  for(int i = 0; i < 100000; i++){}
}

Outputs:

Bench "for loop" took: 230656 units

TarmoPikaro
#include <cstdio>
#include <ctime>
#include <functional>

using namespace std;

void f() {
  clock_t begin = clock();

  // ...code to measure time...

  clock_t end = clock();

  function<double(clock_t, clock_t)> convtime = [](clock_t begin, clock_t end)
  {
     return double(end - begin) / CLOCKS_PER_SEC;
  };

  printf("Elapsed time: %.2g sec\n", convtime(begin, end));

}

Similar to the example available here, only with an additional conversion function plus a printout.


Sirn Nguyen Truong

I have created a class to automatically measure elapsed time. Please check the code (C++11) at this link: https://github.com/sonnt174/Common/blob/master/time_measure.h

Example of how to use class TimeMeasure:

void test_time_measure(std::vector<int> arr) {
  TimeMeasure<chrono::microseconds> time_mea;  // create time measure obj
  std::sort(begin(arr), end(arr));
}

I like your print statement with the units. What would it take to port your code to gcc and clang? (wandbox.org)
@HowardHinnant: thanks for addressing, I updated the code for gcc and clang as well.