The goal is to show that, in some contexts, instead of spending run-time to compute something, you can compute it at compile-time.
- run-time is what you want to optimize;
- compile-time is something you can increase (within limits) without impacting your customer.
Below is a simple recursive function that computes the n-th element of the Fibonacci sequence.
int Fibonacci_recursive(int nieme)
{
    if (0 == nieme)
        return 0;
    if (1 == nieme)
        return 1;
    return Fibonacci_recursive(nieme - 1) + Fibonacci_recursive(nieme - 2);
}
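For illustration, here is a minimal way you might call it; the call site and the argument 10 are assumptions for the example, not taken from any particular code base:

#include <cstdio>

int main()
{
    // The value is computed at run-time, every time this line executes.
    std::printf("%d\n", Fibonacci_recursive(10)); // prints 55
    return 0;
}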
Now if you call that function somewhere in your code and disassemble the resulting binary, you will see something like:
00E91357 call Fibonacci_recursive (0E91270h)
The called code executes many opcodes and calls itself several times, depending on which element you ask for. This computation does not run in constant time!
So it would be interesting if the compiler could compute the value for us: at run-time we would then get the result directly, in constant time. To achieve that, you can use macros (the preprocessor) or templates (C++ meta-programming). I chose the template way:
// General case: the recursion happens on the template parameter.
template<int I> struct Fibonacci_meta {
    enum { value = Fibonacci_meta<I - 1>::value + Fibonacci_meta<I - 2>::value };
};
// Specializations for 1 and 0 terminate the recursion.
template<> struct Fibonacci_meta<1> {
    enum { value = 1 };
};
template<> struct Fibonacci_meta<0> {
    enum { value = 0 };
};
Now if you write Fibonacci_meta<10>::value in your code and analyze the result, you will just see something like:
008F7BFF push 37h
And as you may know, 0x37 == 55, the 10th element of the Fibonacci sequence.
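As a quick check that the value really is known before the program runs, you can assert it statically (a small sketch assuming a C++11 or later compiler):

// This fails at compile-time if the meta-computation were wrong,
// which proves the value exists before the program ever executes.
static_assert(Fibonacci_meta<10>::value == 55, "compile-time Fibonacci is wrong");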
It is a simple example, but we removed several function calls and let the compiler find the result at compile-time!
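As a side note, on a C++11 (or later) compiler the same effect can be obtained with a constexpr function; this is only a sketch of an alternative, not part of the template approach shown above:

// constexpr lets the compiler evaluate the call whenever the argument
// is a compile-time constant (for example in a static_assert).
constexpr int Fibonacci_constexpr(int n)
{
    return (n == 0) ? 0
         : (n == 1) ? 1
         : Fibonacci_constexpr(n - 1) + Fibonacci_constexpr(n - 2);
}

static_assert(Fibonacci_constexpr(10) == 55, "evaluated at compile-time");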