Software engineers use Big O notation to estimate quickly how long an algorithm takes to process N items of data. The most common rates of growth are explained below.
Big O notation for Common Rates of Growth
- O(c) or O(1)
O(c), for any constant c, means that an algorithm takes some fixed amount of time to finish, regardless of the number of inputs. Please remember that technically O(1), O(2), O(3) . . . O(c), etc., all mean the same thing; there is no difference among them. However, the preferred symbol for a constant growth rate is O(1). So whenever the growth rate is given as O(1), it implies that the algorithm finishes in a time that is independent of the amount of data it processes. This is an ideal algorithm.
- O(log N)
O(log N) denotes a logarithmic growth rate - the base is usually 2, but it can be any base. The whole point is that the execution time merely doubles when the number of inputs is squared. This is the reason: log N² = 2 log N. Take an example. Let us say we have an algorithm labelled as O(log N), and assume it takes 1 second to process an array of 10,000 int numbers. The execution time only doubles, to 2 seconds, when the number of inputs grows to 100,000,000 (which is 10,000²)! So you can conclude that O(log N) grows very slowly, and therefore the algorithm is a good algorithm, although not as good as the O(1) algorithm above.
- O(logᵏ N)
This order of growth is log raised to the power k. For k > 1 it grows faster than O(log N), but still far more slowly than O(N).
- O(N) - linear growth
This symbol represents a linear growth rate - a proportional growth rate. Please again note that O(2N), O(3N), etc., are also linear, and all of them are equivalent. However, all linear rates are customarily clubbed together as O(N). Let's take an example. Consider a simple algorithm implemented with a for-loop that prints "hello". The time taken by this for-loop is directly proportional to the number of iterations, and, therefore, it is labelled O(N). Please note that the growth stays O(N) even if more printf statements are added to the for-loop, because the time taken will still be directly proportional to the number of iterations. The "key" is "direct proportionality", not the 2x, 3x constant factor.
- O(N log N) - linear-log growth
Divide-and-conquer algorithms that split the input in half and then do linear work to combine the results - merge sort being the standard example - typically exhibit this type of growth: log N levels of recursion, with O(N) work at each level.
- O(N²), O(N³) - quadratic, cubic
These growth rates are called quadratic and cubic rates. Nested for-loops are typical examples of these growth rates.
- O(2ᴺ) - exponential
This is an exponential rate. An algorithm can be labelled O(2ᴺ) if it takes 2, 4, 8, and 16 seconds to process 1, 2, 3, and 4 inputs respectively: the time doubles with every single additional input.
Ascending order of the Rates
The above growth rates, listed in ascending order from best (slowest-growing) to worst (fastest-growing), are: O(1) < O(log N) < O(logᵏ N) < O(N) < O(N log N) < O(N²) < O(N³) < O(2ᴺ)
This Blog Post/Article "(Algorithms) Explanation of the common big-O rates of growth for beginners" by Parveen is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.