Fundamentals

Big O Notation

In short, Big O notation is a mathematical notation that describes the limiting behavior of a function as its argument tends towards a particular value or infinity. In layman’s terms: if the load doubles, how much more compute do I need?

Big O notation can be used to describe how the performance of software changes as the load increases. If the load doubles, do I need twice the amount of resources? Less than twice? More than twice? If scaling is to be possible at all, it must be less than twice.
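
As a concrete (illustrative, not measured) example: if a request over 1,000 records takes 10 ms, then at 2,000 records an O(n) implementation would need roughly 20 ms, while an O(n²) implementation would need roughly 40 ms.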

Though in theory any function is possible, a few common classes cover most cases (illustrated in the sketch after this list):

  • O(1): Regardless of the size of the input, processing will always take the same time.
  • O(log n): If the data size doubles, the time it takes to process it only increases by a constant amount.
  • O(n): If the dataset is twice as large, processing it will take twice as long.
  • O(n log n): If the data size doubles, the time it takes to process it a little more than doubles.
  • O(n²): If the data size doubles, the time it takes to process it quadruples.
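
A minimal Python sketch of these classes; the data set, index and function names are made up for illustration:

    import bisect

    data = list(range(1_000_000))                        # sorted list of integers
    index = {value: i for i, value in enumerate(data)}   # hash map for O(1) lookups

    # O(1): a dict lookup takes roughly the same time no matter how large `index` is.
    def constant_lookup(value):
        return index.get(value)

    # O(log n): binary search halves the remaining range on every step,
    # so doubling the data adds only one extra step.
    def binary_search(value):
        i = bisect.bisect_left(data, value)
        return i if i < len(data) and data[i] == value else -1

    # O(n): a linear scan touches every element once,
    # so twice the data means twice the work.
    def linear_sum():
        return sum(data)

    # O(n log n): comparison-based sorting, here Python's built-in sorted().
    def sort_descending():
        return sorted(data, reverse=True)

    # O(n^2): comparing every element with every other element,
    # so doubling the data quadruples the work.
    def count_equal_pairs(values):
        pairs = 0
        for a in values:
            for b in values:
                if a == b:
                    pairs += 1
        return pairs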

Any process that behaves worse than O(n) cannot be scaled. Scaling an O(n) process is already very expensive. One should aim for implementations that are O(log n) (which is usually possible) or O(1) (which is hard to achieve).
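
To make that last point concrete, a hedged Python sketch (the function names and data handling are assumptions for illustration, not taken from any particular codebase): a membership test against a list is O(n), so running it once per element makes the whole routine O(n²); switching to a set makes each test O(1) on average and the routine O(n).

    # O(n^2): `seen` is a list, so the `in` test scans it linearly for every element.
    def find_duplicates_slow(values):
        seen, duplicates = [], []
        for v in values:
            if v in seen:              # O(n) membership test
                duplicates.append(v)
            else:
                seen.append(v)
        return duplicates

    # O(n): `seen` is a set, so the `in` test is an O(1) hash lookup on average.
    def find_duplicates_fast(values):
        seen, duplicates = set(), []
        for v in values:
            if v in seen:              # O(1) membership test
                duplicates.append(v)
            else:
                seen.add(v)
        return duplicates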