[Please do not mail me a copy of your followup]
Post by Scott Lurndal
Granted most of that code ran sans OS (it was the OS), and required
fixed types to map to various hardware registers.
This is the situation where I think the size of the data really *does*
matter and it's important to use sized types and not the implicit size
of int, long, etc. Things like structures representing byte streams
passed across the network (and you might do network-to-host byte
reordering in place on that structure), raw byte streams read from or
written to files, raw bytes transmitted between processes through
shared memory segments, and so on.
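To make that concrete, here's a minimal sketch of the sort of thing I
mean (the message layout and names are invented for illustration;
ntohs/ntohl come from the POSIX <arpa/inet.h> header):

#include <cstdint>
#include <arpa/inet.h>

// A wire-format header whose field widths are dictated by the protocol,
// so fixed-size types are exactly what you want here.
struct WireHeader
{
    std::uint16_t version;
    std::uint16_t type;
    std::uint32_t payload_length;
    std::uint32_t sequence;
};
static_assert(sizeof(WireHeader) == 12, "unexpected padding in wire header");

// Network-to-host byte reordering done in place on the received header.
inline void to_host_order(WireHeader& h)
{
    h.version        = ntohs(h.version);
    h.type           = ntohs(h.type);
    h.payload_length = ntohl(h.payload_length);
    h.sequence       = ntohl(h.sequence);
}

Here int, long, etc. would be actively wrong: the protocol says the
length field is exactly 32 bits, big-endian, and nothing about the
host's idea of an int changes that.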
I've got some code in my open source project that doesn't use
specifically sized types for some binary file I/O, and it's a mess
precisely because it uses generic types. So I am certainly sympathetic
to cases where it matters.
My assertion is that it simply doesn't matter in *every* case.
Post by Scott Lurndal
However, absent API requirements for other types, I would prefer
using types for which I understand the characteristics in all
conditions, thus I prefer the explicitly sized types. Having
run into many issues in the past porting software from 16-bit
ints to 32-bit ints (and from 32-bit longs to 64-bit longs),
I would never advocate using 'int' for anything.
Here, I disagree. The size of every int in a program isn't a
portability concern. What's important is deciding which variables need
specific sizes and which don't.[*]
I've seen code where the compiler's default size of an int was 16 bits,
and everywhere they wanted to iterate over containers or whatnot it was
int16_t all over the place. Then you move to a compiler where the
default size of an int is 32 bits. The fact that all those ints were
marked as 16 bits is now erroneous and simply a distraction. How do
you know which ones really needed to be 16 bits and which were 16 bits
simply because that was the default size of an int? Forcing them all
into a 16-bit straitjacket impedes portability instead of enhancing
it.
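A hypothetical fragment in the style I'm describing (sum_legacy and sum
are made-up names; the point is the index type, not the function):

#include <cstddef>
#include <cstdint>
#include <vector>

// The original author's compiler had 16-bit ints, so every index got
// spelled int16_t.  The width was never a requirement, just the
// compiler's default.
double sum_legacy(const std::vector<double>& v)
{
    double total = 0.0;
    for (int16_t i = 0; i < static_cast<int16_t>(v.size()); ++i)
        total += v[static_cast<std::size_t>(i)];
    return total;   // silently wrong once v.size() exceeds 32767
}

// What the code actually meant was "an index into v", nothing more.
double sum(const std::vector<double>& v)
{
    double total = 0.0;
    for (std::size_t i = 0; i < v.size(); ++i)
        total += v[i];
    return total;
}

Nothing in sum_legacy ever needed exactly 16 bits; the type just
parrots what int happened to be on the original machine, and it turns
into a bug the moment the container grows or the code moves.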
In other words, like most things in programming, it's a matter of good
judgment. Simplistically applying a rule like "never use int" is
opening up your skull and dumping your brains in the garbage. There is
a time when specifically sized types are important.
For most of the C++ I have worked on in the past 25 years, it was
important in only a very few cases. Working on code where the team
insisted on sizing every single named quantity in the application was
tedious and yielded little to no value. I have done very little
programming in embedded environments with strict resource limits, and I
can see how someone who spent 25 years in that environment would
consider it indispensable that everything be specifically sized. So it
varies with experience and problem domain.
But this is just another reason to advocate for proper application of
good judgment for your problem domain instead of adopting a simplistic
rule. Even within a problem domain, things can change over time.
Embedded processors today have access to many more resources than they
did in the '80s, when 64 users time-shared out of 128KB of main memory
and an embedded CPU was lucky to have 128 bytes of RAM and 16K of ROM.
[*] Aside: if sizing is all that important, does that mean you encode
    the byte size of a struct into its name? If it's really that
    important for an int to be declared with a specific size, but you
    don't mandate the same thing for structs and classes, then the rule
    is an academic exercise in pedantry.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
The Terminals Wiki <http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>