Jeff Koftinoff
2008-11-20 23:02:19 UTC
It surprised me to see the following code, compiled with GNU GCC 4.0:
#include <stdint.h>
#include <iostream>
template <typename T>
void show( T a )
{
std::cout << "calls: " << __PRETTY_FUNCTION__ << std::endl;
}
#define compare(T1,T2) { T1 a = 100; T2 b = 666; std::cout << #T1 " * " #T2 " "; show(a * b); }
int main()
{
compare(float,int64_t);
compare(int64_t,float);
compare(float,double);
compare(double,float);
compare(int8_t,int16_t);
compare(int16_t,int8_t);
compare(int32_t,uint16_t);
compare(uint16_t,int32_t);
compare(char,int32_t);
compare(int32_t,char);
compare(int64_t,int32_t);
compare(int32_t,int64_t);
}
It outputs the following:
float * int64_t calls: void show(T) [with T = float]
int64_t * float calls: void show(T) [with T = float]
float * double calls: void show(T) [with T = double]
double * float calls: void show(T) [with T = double]
int8_t * int16_t calls: void show(T) [with T = int]
int16_t * int8_t calls: void show(T) [with T = int]
int32_t * uint16_t calls: void show(T) [with T = int]
uint16_t * int32_t calls: void show(T) [with T = int]
char * int32_t calls: void show(T) [with T = int]
int32_t * char calls: void show(T) [with T = int]
int64_t * int32_t calls: void show(T) [with T = long long int]
int32_t * int64_t calls: void show(T) [with T = long long int]
It would have made more sense to me if float * int64_t yielded a
double, or even an int64_t, but not a float... What is the real rule?
I had just assumed that the type with the most precision would be
chosen.
--jeffk++
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]