When code is waiting for some condition whose delay time is not deterministic, many people seem to choose exponential backoff, i.e. wait N seconds, check whether the condition is satisfied; if not, wait 2N seconds, check again, and so on. What is the benefit of this over checking at a constant or linearly increasing interval?

Exponential back-off is useful in cases where simultaneous attempts to do something will interfere with each other such that *none* succeed. In such cases, having devices randomly attempt an operation in a window which is too small will result in most attempts failing and having to be retried. Only once the window has grown large enough will attempts have any significant likelihood of success.

If one knew in advance that 16 devices would be wanting to communicate, one could select the size of window that would be optimal for that level of loading. In practice, though, the number of competing devices is generally unknown. The advantage of an exponential back-off where the window size doubles on each retry is that *regardless of the number of competing entities*:

- The window size where most operations succeed will generally be within a factor of two of the smallest window size where most operations would succeed,

- Most of the operations which fail at that window size will succeed on the next attempt (since most of the earlier operations will have succeeded, that will leave fewer than half of them competing for a window which is twice as big), and

- The total time required for all attempts will end up only being about twice what was required for the last one.
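The doubling-window scheme described above can be sketched as follows (a minimal sketch; `attempt_operation` is a hypothetical stand-in for whatever contended operation is being retried):

```python
import random
import time

def retry_with_backoff(attempt_operation, base=0.1, max_retries=8):
    """Retry attempt_operation, doubling the random-wait window each time."""
    window = base
    for _ in range(max_retries):
        if attempt_operation():
            return True
        # Pick a random point within the current window so that competing
        # devices spread out, then double the window for the next attempt.
        time.sleep(random.uniform(0, window))
        window *= 2
    return False
```

The random draw inside the window is the important part: if every device waited exactly `window` seconds, they would all collide again on the retry.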

If, instead of doubling each time, the window were simply increased by a constant amount, then the time spent retrying an operation until the window reached a usable size would be proportional to the square of whatever window size was required. While the final window size might be smaller than would have been used with exponential back-off, the total cost of all the attempts would be much greater.
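To make the quadratic-versus-doubling cost concrete, here is a back-of-the-envelope comparison (window sizes in units of N; the required window of 1024 is an arbitrary illustrative value):

```python
required = 1024  # smallest window size (in units of N) at which attempts succeed

# Exponential growth: 1, 2, 4, ... until the window reaches the required size.
exp_windows = []
w = 1
while w < required:
    exp_windows.append(w)
    w *= 2
exp_windows.append(w)

# Linear growth by a constant increment of 1: 1, 2, 3, ...
lin_windows = list(range(1, required + 1))

print(sum(exp_windows))  # 2047   -- about twice the final window size
print(sum(lin_windows))  # 524800 -- proportional to required**2
```

The exponential schedule pays a total cost of roughly `2 * required`, while the linear one pays roughly `required**2 / 2`.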

This is the behavior of TCP congestion control. If the network is extremely congested, effectively no traffic gets through. If every node waits a constant time before checking, the traffic generated just by the checks will continue to clog the network, and the congestion never resolves. Similarly, with a linearly increasing time between checks, it may take a long time before the congestion resolves.

Assuming you are referring to testing a condition before performing an action:

- Exponential backoff is beneficial when the cost of testing the condition is comparable to the cost of performing the action (such as in network congestion).
- If the cost of testing the condition is much smaller (or negligible), then a linear or constant wait can work better, provided the time it takes for the condition to change is negligible as well.

For example, if your condition is a complex (slow) query against a database, and the action is an update of the same database, then every check of the condition will negatively impact the database's performance, and at some point, without exponential backoff, the checks alone from multiple actors could be enough to exhaust the database's resources.
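A polling loop for such an expensive check might look like this (a sketch only; `check_condition` is a hypothetical stand-in for the slow query, and the cap keeps the delay from growing without bound):

```python
import time

def wait_for_condition(check_condition, base=1.0, cap=60.0, timeout=300.0):
    """Poll a (possibly expensive) condition with capped exponential backoff."""
    delay = base
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check_condition():
            return True
        # Never sleep past the deadline.
        time.sleep(min(delay, max(0.0, deadline - time.monotonic())))
        delay = min(delay * 2, cap)  # double, but cap the delay
    return False
```

With many actors polling, the doubling quickly thins out the rate of expensive checks, which is exactly the property the database needs.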

But if the condition is just a lightweight memory check (e.g. a critical section), and the action is still an update of a database (easily tens of thousands of times slower than the check), and if the condition is flipped in negligible time at the very start of the action (by entering the critical section), then a constant or linear backoff would be fine. In fact, under this particular scenario, an exponential backoff would be detrimental: it would introduce delays in situations of low load, and it is more likely to result in time-outs in situations of high load (even when the processing bandwidth is sufficient).
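For the lightweight-check case, a constant backoff is about as simple as it gets (a sketch, using Python's `threading.Lock` as the critical section; the database update itself is elided):

```python
import threading
import time

lock = threading.Lock()

def do_update():
    # The check (a non-blocking acquire) is a cheap in-memory operation,
    # so retrying it at a constant short interval costs almost nothing
    # relative to the database update it guards.
    while not lock.acquire(blocking=False):
        time.sleep(0.001)  # constant backoff: fine when the check is cheap
    try:
        pass  # the (much slower) database update would go here
    finally:
        lock.release()
```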

So to summarize, exponential backoff is a hammer: it works great for nails, not so much for screws :)