Software engineers use Big O notation to get a quick estimate of the time an algorithm takes to process "N" items of data. The most common rates of growth are explained below.
Big O notation for Common Rates of Growth
 O(c) or O(1)

O(c), for any constant c, means that an algorithm takes a fixed amount of time to finish, regardless of the number of inputs. Please remember that technically O(1), O(2), O(3) . . . O(c), etc., all mean the same thing; there is no difference among them. However, the preferred symbol for a constant growth rate is O(1). So whenever the growth rate is given as O(1), it implies that the algorithm finishes in a time that is independent of the amount of data it processes. This is an ideal algorithm.
 O(log N)

O(log N) denotes a logarithmic growth rate; the base is usually 2, but can be anything. The whole point is that the execution time doubles when the number of inputs is squared. This is the reason: log N^{2} = 2 log N. Take an example. Let us say we have an algorithm labelled as O(log N), and assume it takes 1 second to process an array of 10000 int numbers. The execution time will double to 2 seconds only when the number of inputs grows to 100000000 (that is, 10000^{2})! So you can conclude that O(log N) grows very slowly, and therefore the algorithm is a good algorithm, although not as good as the O(1) algorithm above.
 O(log^{k} N)

This order of growth is log raised to the power k, and it grows faster than O(log N), though still much more slowly than O(N).
 O(N) - linear growth

This symbol represents a linear growth rate, i.e. a proportional growth rate. Please again note that O(2N), O(3N), etc., are also linear, and all of them are equivalent. However, all linear rates are customarily clubbed together as O(N). Let's take an example. Consider a simple algorithm implemented with a for loop that prints "hello". The time taken by this for loop is directly proportional to the number of iterations, and therefore it is labelled O(N). Please note that the growth will still be O(N) even if more printf statements are added to the loop body, because the time taken to execute remains directly proportional to the number of iterations. The "key" is "direct proportionality", not the 2x, 3x constant factor.
 O(N log N) - linearithmic growth

Divide-and-conquer algorithms that split the input in half and do linear work at each level of recursion, such as merge sort, typically exhibit this type of growth.
 O(N^{2}), O(N^{3}) - quadratic, cubic

These growth rates are called quadratic and cubic rates. Nested for loops are typical examples: two levels of nesting give quadratic growth, three levels give cubic growth.
 O(2^{N}) - exponential

These are exponential rates. An algorithm can be labelled as O(2^{N}) if it takes 2, 4, 8, and 16 seconds to process 1, 2, 3, and 4 inputs respectively: the time doubles with every additional input.
Ascending order of the Rates
The above growth rates, listed from best to worst (slowest-growing to fastest-growing), are: O(1) < O(log N) < O(log^{k} N) < O(N) < O(N log N) < O(N^{2}) < O(N^{3}) < O(2^{N})
This Blog Post/Article "(Algorithms) Explanation of the common big-O rates of growth for beginners" by Parveen is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.