In the C99 standard the datatype _Bool was added, together with the header stdbool.h, which defines the friendlier alias bool. When thinking about a bool, people typically imagine something that can be either true or false, zero or one; a single bit.
In a modern computer you cannot actually address single bits individually. Everything is addressed in multiples of at least 8 bits (i.e. a byte). You could write a simple program to figure out the size of a bool.
#include <stdio.h>
#include <stdbool.h>

struct foo{
    bool happy;
    int full_emotion_spectrum;
    bool depressed;
};

int main(){
    struct foo f;
    printf("foo takes up %zu byte(s)\n", sizeof(f));
    return 0;
}
On a typical 64-bit system this prints 12 bytes: each bool only needs one byte, but the compiler inserts padding so the int stays aligned. Grouping the two bools together shrinks the struct to 8 bytes:

struct foo{
    bool happy;
    bool depressed;
    int full_emotion_spectrum;
};
...
unsigned int settings = 0;
// setting the fullscreen bit to true
settings = settings | SET_FULLSCREEN;
// setting the auto respawn bit to false
settings = settings & (~SET_AUTO_RESPAWN);
Linus Torvalds, the creator of Linux, does not recommend the use of the bool datatype (https://lkml.org/lkml/2013/8/31/138). He probably knows how to code a little bit and thus it is not a bad idea to follow his lead. I never use bool and I wish it weren't in the C language. It misleads beginners into thinking they are using less space than they really are and it is just yet another opportunity to trip up.
1. Readability of code
Clearly written code is a whole topic unto itself, but in my opinion the name of the variable is more important than the choice of the datatype. This is somewhat subjective, but I consider the code snippet above to be very readable.
2. Lots of space on modern hardware
I don't buy into the notion that just because there is lots of space, one shouldn't worry about wasting it. Nowadays software typically relies on a lot of dependencies. Tiny inefficiencies each step of the way build up to significant differences.
Also, the space argument only applies in cases where you use multiple booleans. What I am suggesting is that when you have only one boolean, you should just use an int, since that is how much memory will typically be used anyway.
3. "Never" using bool is too extreme/insane
Writing good code is in part about cultivating good habits. The bool datatype never gives you anything more than you would get with an int. If you never use it, you don't have to think about when it would be okay to use it. It reduces cognitive load.
Let's take an example where you write code for a till in a shop. People can either pay with cash or card. You feel confident there are only two options so you use a bool (e.g. bool payment_option; ). Later it becomes possible to pay with gift cards. Now you need to go back and change the datatype. If you had used an int, you simply add another option.
4. Premature optimization
I was quite surprised by this argument. My article is not about optimization at all (premature or otherwise). Optimization means you first write the code one way and make sure it works correctly; after that, you go back over it and think about how to do the job better, either in terms of speed or memory usage.
My suggestion was simply never to use bool and always use an int, since it typically uses the same amount of memory anyway. I agree that one shouldn't optimize prematurely and should focus on making the code work first.
5. I shouldn't assume an int is always 32 bits.
I agree with that one; it would have been better to use a uint32_t (from stdint.h) for the settings variable in my code snippet above.