A flag is a logical concept, not a special type of variable. The idea is that a variable records the occurrence of an event: it is set "one way" if the event happened, and "the other way" if it did not.
The common example of a flag is the flag on a mailbox. Assume the flag on a mailbox is down at the start of the day. If the mail carrier brings any mail for the occupant, they put the flag up. When the occupant comes home later, they can look at the flag and tell whether there is mail to get or not. The occupant is not concerned with how many pieces of mail there are, or when the mail carrier brought the mail. They want to know the answer to a binary question: is there mail to get?
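The mailbox analogy maps directly onto code. Here is a minimal sketch (the variable names and the sample deliveries are mine, just for illustration):

```python
# The flag starts "down" at the start of the day.
mail_arrived = False

deliveries = ["postcard", "bill"]   # whatever the carrier brings that day
for item in deliveries:
    mail_arrived = True             # the carrier raises the flag

# Later, the occupant checks the flag, not the individual deliveries.
if mail_arrived:
    print("there is mail to get")
else:
    print("no mail today")
```

Notice that the flag answers only the binary question; it does not record how many pieces of mail arrived or when.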
Very often flags are variables that are allowed to have only TWO values. In most languages you find a "logical" or "Boolean" type (after George Boole). This is the best choice for a flag because a Boolean is restricted to exactly two values, usually called True and False. (Those are the constants that Python uses.)
Any data type can be used as a flag. An integer could have the value 1 if something happened and 0 if it did not. BUT because it is an integer, and the language will allow it to have any integer value, the flag could accidentally be set to an inconsistent value, like 23. A string could be used, with "yes" and "no" perhaps as the flag values. Again, it is not the best type, because strings can have many different values, not just those two.
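The danger with a non-Boolean flag can be shown in a couple of lines (the names here are illustrative):

```python
# An int "flag": the language happily accepts values that mean nothing.
error_flag = 0      # intended: 0 means "no error", 1 means "error"
error_flag = 23     # legal Python, but nonsense as a flag

# A real Boolean flag can only ever be True or False.
error_seen = False
error_seen = True
```

Python will not stop you from assigning 23 to an int flag; only the choice of a Boolean (plus discipline) keeps the flag to its two meaningful values.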
Python does not enforce variable types, so any variable can be given a value of any type. But a disciplined programmer can treat a flag variable as having ONLY the values True and False.
The problem is to allow 5 numbers to be input and determine if any of them were larger than 2000. What is the difference in effect of these 3 segments of code?
```python
## this one is NOT good! it is bad!
big_number_flag = False
for i in range(5):
    n = int(input("Enter a number"))
    if n > 2000:
        big_number_flag = True
    else:
        big_number_flag = False

# after the loop
if big_number_flag:
    print("saw at least one big number")
else:
    print("didn't see any big numbers")
```
After the loop is finished, all you can say about big_number_flag is that it is True or False depending ONLY on the LAST value input, not on all 5 values. It is very tempting to write flag code like this if you are not thinking. Not every if needs an else!
```python
#----------------------------------------------
# this one is bad too!
big_number_flag = True  ## wrong initialization!
for i in range(5):
    n = int(input("Enter a number"))
    if n > 2000:
        big_number_flag = True

# after the loop
if big_number_flag:
    print("saw at least one big number")
else:
    print("didn't see any big numbers")
```
In this case, it is possible to say after the loop that the value of big_number_flag WILL be True. BUT that means nothing, because it may be True because of the initialization, OR because it was set (reset) to True inside the loop. Note a very obvious bug here: NOWHERE is the flag set to False. If there is no way for it to be set to anything but True, it is useless.
```python
#----------------------------------------------
# this one is MUCH better
big_number_flag = False
for i in range(5):
    n = int(input("Enter a number"))
    if n > 2000:
        big_number_flag = True
```
After the loop, if big_number_flag is True, SOME value over 2000 was seen; it could have been any one (or more than one) of the 5 values, but that's OK. If big_number_flag is False, you can be sure that no number over 2000 was seen. This code is the usual way to write the logic for a flag.
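The correct pattern is easier to check if the loop is wrapped in a function; here the five input() calls are replaced by a list parameter so the logic can be exercised without typing numbers (the function name and parameter are mine, not part of the lesson):

```python
def saw_big_number(values, threshold=2000):
    """Return True if any value exceeds threshold, using the flag pattern."""
    big_number_flag = False          # 1. initialize before the loop
    for n in values:
        if n > threshold:
            big_number_flag = True   # 2. set (never reset) inside the loop
    return big_number_flag           # 3. test/report after the loop

print(saw_big_number([5, 9000, 3]))  # True: 9000 is over 2000
print(saw_big_number([1, 2, 3]))     # False: no value is over 2000
```

Separating the flag logic from input() also makes it obvious that the flag depends on all the values seen, not just the last one.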
This code does three things, in this order:

1. Initializes the flag to False before the loop (no big number seen yet).
2. Sets the flag to True inside the loop when the event is seen, and never sets it back.
3. Tests the flag after the loop, when its value is meaningful.
To summarize, you should be able to write a sentence about any variable that you use as a flag: "error_seen will be True if any error in the inputs has been seen (where error means ...) and it will be False if no errors have been seen." This should be an "invariant," meaning the statement holds throughout the program. That forces you to think about "what do I initialize it to at the start?" Well, I haven't seen any errors if I haven't seen any input, so it starts as False. It also means you test the flag when it is meaningful to do so: you usually don't test it during the main processing, you test it afterwards. You set it during the processing as needed.
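The error_seen sentence above can be turned directly into code. This sketch assumes "error" means "the input is not a whole number," and the sample inputs are mine:

```python
inputs = ["12", "oops", "40"]      # illustrative stand-in for user input

error_seen = False                 # invariant holds: no input read, no error seen
for text in inputs:
    if not text.isdigit():         # "error" here means: not a whole number
        error_seen = True          # set during processing

# test afterwards, when the answer is meaningful
if error_seen:
    print("at least one input was not a number")
else:
    print("all inputs were numbers")
```

At every point in this program the sentence "error_seen is True if and only if some bad input has been seen so far" is true, which is exactly what makes it an invariant.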
Flags are very useful in cases where knowing that an event happened is necessary at a future time, but not right at the moment. For some reason the event cannot be dealt with until afterwards. A flag is a "memory" that can be used later to process the fact that the event happened.
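As an example of that "memory" use, suppose a warning about blank lines cannot be printed until all lines have been counted. A flag remembers the event until it is time to act on it (the variable names and sample data are mine):

```python
lines = ["first", "", "third"]   # illustrative input lines

blank_seen = False
count = 0
for line in lines:
    count += 1
    if line == "":
        blank_seen = True        # remember the event for later

# the event is handled only after the main processing is done
print(f"read {count} lines")
if blank_seen:
    print("warning: at least one line was blank")
```

The blank line is detected in the middle of the loop, but nothing can be done about it there; the flag carries that fact forward to the reporting step.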