Algorithm Series: Big O Notation in less than 5 minutes

Maria Le
4 min read · Oct 18, 2020

Hello! This is the start of a blog series for learning algorithms. Before we start learning different algorithms and data structures to use as tools, we must understand Big O in relation to our solutions.

What is Big O Notation?

Simply put, Big O notation describes the efficiency of our algorithms. The “O” stands for “order,” the math term for the growth rate of a function. Similarly, in programming, it describes how the runtime of your algorithm grows as the size of the input grows. It’s a way to measure how our code performs compared to other approaches to solving the same problem. We use Big O to describe both time complexity (runtime speed) and space complexity (memory). This article will only give examples referring to time complexity; I’ll cover space complexity in another blog post.

So how do we label the Big O of our algorithm?

There are several common complexities we can use to describe the Big O of an algorithm. In this article, I’m only going to cover the basics of O(1), O(n), and O(n²):

  • O(1) constant time
  • O(n) linear time
  • O(n²) quadratic time
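
To get a feel for how differently these three grow, here are some back-of-the-envelope operation counts (my own rough numbers, ignoring constant factors):

  • n = 10 → O(1): a fixed handful of operations, O(n): ~10, O(n²): ~100
  • n = 1,000 → O(1): still a fixed handful, O(n): ~1,000, O(n²): ~1,000,000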

Different computers will have different runtime speeds depending on their hardware and environment, so instead of measuring Big O by actually timing our code, we count the number of operations we declare in our algorithm. Below are examples of simplifying Big O in 3 different functions.

Constant Time O(1)

O(1) refers to when the runtime stays constant no matter how much n grows. An example of this would be:

//constant runtime example
//let's count the operations: *, +, and / = 3 operations
function constantTime(n){
  return n * (n + 6) / 2;
}

No matter how large n is, whether the input is 8 or 10000000000, the algorithm we are implementing will only ever perform 3 operations. The Big O will be O(1), which is fast, but most of our complex algorithms will be something different, like linear or logarithmic time.
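
As a quick sanity check, here’s what calling our example looks like (the return value is just the arithmetic worked out, and the operation count never changes):

//3 operations either way
constantTime(8); //returns 8 * 14 / 2 = 56
constantTime(10000000000); //same 3 operations, just with bigger numbers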

Linear Time O(n)

In comparison, we have linear time, which means that as n gets larger, the runtime grows in direct proportion to it.

//linear time example
//because we have a loop, our # of operations grows as n does
function linearTime(n){
  let total = 0;
  for (let i = 0; i <= n; i++){
    total += i;
  }
  return total;
}

Let’s say we call linearTime(10). Since we are looping, we perform the addition roughly 10 times. Counting the comparison and increment on each pass too, we might write something like O(3n), but we don’t care about constant coefficients, so the function’s Big O simplifies to O(n). If we called linearTime(1), we would only perform the addition once or twice. When we count the operations in this function, they grow in proportion to n.
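
To see why coefficients get dropped, here’s a sketch of my own (a hypothetical doubleLinear function, not part of the example above) with two back-to-back loops. It does roughly twice the work of linearTime, so you could call it O(2n), but it still simplifies to O(n):

//two separate (not nested) loops, each doing ~n operations
function doubleLinear(n){
  let total = 0;
  for (let i = 0; i < n; i++){
    total += i; //~n additions
  }
  for (let j = 0; j < n; j++){
    total += j; //~n more additions
  }
  return total; //~2n operations overall, so O(2n) simplifies to O(n)
}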

Quadratic Time O(n²)

The last example case we will cover is quadratic time. This means that as n grows, the runtime grows at the rate of n².

//quadratic time example
//the runtime grows quadratically as n grows
function quadTime(n){
  for (let i = 0; i < n; i++){
    for (let j = 0; j < n; j++){
      console.log(i, j);
    }
  }
}

We have a nested loop. The inner loop, as we saw in the linear time example, is O(n). The outer loop is also O(n). Because the loops are nested, the inner loop runs its n passes once for each of the outer loop's n passes, so the Big O is O(n * n), which simplifies to O(n²).

Let’s go through a few test cases:

  1. First we call quadTime(2). 2 is our n in this case, so we would console.log(i, j) 4 times.
  2. Next let’s call quadTime(4). n=4, so if our function’s runtime is growing quadratically, we can expect to console.log(i, j) 16 times.

Below is what running our quadratic example in the JavaScript console looks like (the output is deterministic, so we can write it out here):
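
//quadTime(2) logs a pair on each of the n² = 4 inner-loop passes
quadTime(2);
// 0 0
// 0 1
// 1 0
// 1 1
//quadTime(4) prints 16 pairs the same way: 4² = 16, just as predicted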

Thanks for reading, and please give me any feedback on this post, whether constructive or encouraging. If you have any suggestions for future posts, please let me know too!

Happy coding!

Mars
