
The Basics of Big O Notation

Vannida Lim · Better Programming · Feb 19, 2020

Photo by Panos Deligiannidis on Unsplash

As a developer, you may have heard the phrase “Big O Notation” thrown around. What is it, exactly? And why should you care?

It turns out that learning about Big O Notation is not only necessary for technical interviews; it's also an important concept to understand in order to write fast, efficient code.

I’ve never been a big math person — it was honestly my most hated subject in school — so just hearing about algorithms and Big O was intimidating enough. However, it’s a fairly simple concept to grasp, so let’s dive in!

Why Big O Notation?

Big O addresses what makes one function better than another. There are multiple ways to solve a given problem, and the first solution that works is not always the best one. This is where Big O comes in. Big O is a way of systematically classifying the runtime of an algorithm: the faster, the better. Different machine hardware can also affect runtime, but Big O doesn't account for this; it doesn't care about precision, just the general trend as the input grows. When we talk about how fast a function runs, we're talking about time complexity. When we talk about space complexity, we're referring to how much memory the algorithm itself takes up. In this piece, we'll only be looking at time complexity.
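To make that "general trend" idea concrete, here's a quick sketch of my own (the function name is just illustrative): a function that passes over its input twice performs roughly 2n operations, but Big O drops the constant factor and simply calls it linear time, written O(n).

// Two passes over the same input: roughly 2n operations in total,
// but Big O drops constant factors, so this is still O(n)
const doubleThenPrint = (arr) => {
  const doubled = arr.map((x) => x * 2);  // first pass: n operations
  doubled.forEach((x) => console.log(x)); // second pass: n more
};

doubleThenPrint([1, 2, 3]); //=> 2, 4, 6 (one per line)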

What Exactly is Big O?

The Big O complexity chart. You may have seen this bad guy before. Source: https://www.bigocheatsheet.com/

If we look at the chart above, the x-axis is labeled "elements" and the y-axis is labeled "operations." Elements accounts for how many inputs a function might need to process, and operations accounts for how many operations it takes to process those inputs. The graph is conveniently colored on a spectrum from green to red, with green being "excellent" and red being "horrible."

I want to talk about three of the seven Os we see — O(1), O(n), and O(n²) — as these are the three most basic concepts. Let’s start in the green area with O(1).

Big O(1) — “O of 1” — Constant

O(1) is essentially the fastest runtime. It’s considered a constant time complexity because, as the number of elements or inputs grows, there will be no change in the runtime. A simple analogy would be telling the 10th person in line to come to the front desk. There is only one person occupying the 10th position and only one front desk to go to.

In the example below, we're accessing an element in the array by the index m. Even if myArr had an infinite number of elements, findElementInArr() would still only return the single element at that index. This function doesn't iterate over every element; it goes straight to the one it needs, like a shortcut.

O(1) Example:

let myArr = ["a", "b", "c", "d"];

const findElementInArr = (m, arr) => {
  return arr[m];
};

findElementInArr(3, myArr); //=> "d"

As a rule of thumb, basic mathematical operations, variable assignments, and accessing elements in arrays (by index) or objects (by key) are all constant time. Here's a quick sketch (the variable names are just illustrative):
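// Each of these is O(1), no matter how large the data gets
let sum = 5 + 3;                  // basic math operation
let colors = { shirt: "blue" };   // variable assignment
let shirtColor = colors["shirt"]; // object access by key
let firstLetter = myArr[0];       // array access by index

Let's see how O(n) compares to O(1).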

Big O(n) — “O of n” — Linear

On the chart, we move from the excellent green zone of O(1) to the fair yellow zone of O(n).

O(n) is considered a linear time complexity because the time it takes for a function to finish is bounded by a multiple of n. In other words, the time it takes the operations to finish is directly proportional to the size of the input n: if n is large, the runtime will be too. Any loop that touches every element, including built-in JavaScript methods like map, filter, and forEach, can be thought of as O(n).

Back to my simple analogy: we can think of every person in that line showing their number position one by one. In the example below, the function has a basic for loop that visits every element in the array, logging each element one after another.

O(n) Example:

let myArr = ["a", "b", "c", "d"];

const printEachLetter = (arr) => {
  for (let i = 0; i < arr.length; i++) {
    console.log(arr[i]);
  }
};

printEachLetter(myArr);
//=> a
//=> b
//=> c
//=> d
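Built-in iteration methods behave the same way. A minimal sketch, reusing the same myArr as above:

// Still O(n): forEach visits each element exactly once
myArr.forEach((letter) => console.log(letter));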

O(n)'s runtime is not great, but it's not bad either; it can be a safe choice. Let's move on to the final concept, O(n²).

O(n²) — “O of n Squared” — Quadratic

Last but not least (well, maybe least, since it's the worst runtime of the three we're covering), we have O(n²).

Whenever you see an algorithm with nested loops over the same input, you can usually assume the time complexity is O(n²). O(n) inside of O(n), a.k.a. O(n * n), a.k.a. O(n²), means that as n grows, the runtime grows quadratically. Quadratic time complexity is considerably slow because of the nested iterations the algorithm has to run.

Using my simple analogy one last time, O(n²) is like going through that line of people one by one, noting the color of their clothes, then comparing each person against every other person in line. In the example below, we iterate through myArr with nested loops to pair every color with every other color, producing each possible combination according to the number of elements in the array.

O(n²) Example:

let myArr = ["green", "blue", "purple", "yellow", "red"];

const allPossibleColorCombinations = (arr) => {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      console.log(arr[i], arr[j]);
    }
  }
};

allPossibleColorCombinations(myArr);
//=> green green
//=> green blue
//=> green purple
//=> green yellow
//=> green red
//=> blue green
//=> blue blue
//=> blue purple
//=> blue yellow
//=> blue red
//=> purple green
//=> purple blue
//=> purple purple
//=> purple yellow
//=> purple red
//=> yellow green
//=> yellow blue
//=> yellow purple
//=> yellow yellow
//=> yellow red
//=> red green
//=> red blue
//=> red purple
//=> red yellow
//=> red red

As you can imagine, the runtime for this algorithm is much longer than that of the O(1) and O(n) examples, given the sheer size of its output: 5 elements in, 25 pairs out. Depending on the codebase you're writing for, O(n²) algorithms can considerably slow down your program. For large applications like Facebook or Twitter, O(n²) algorithms are no bueno!
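To make that quadratic growth concrete, here's a small sketch (countPairOperations is a made-up name for illustration) that counts how many operations those nested loops perform as n grows:

// Counts how many operations the nested loops perform for an input of size n
const countPairOperations = (n) => {
  let operations = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      operations++;
    }
  }
  return operations;
};

countPairOperations(5);   //=> 25
countPairOperations(10);  //=> 100 (doubling n quadruples the work)
countPairOperations(100); //=> 10000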

Next Steps

So there you have it, the three basic Big O Notations you can start implementing in your code!

Now that we have this knowledge, what are the next steps? Well, as developers, we must think about problem-solving when we write code. Understanding the problem you're trying to solve points you toward the appropriate problem-solving patterns. Concepts such as bubble sort, recursion, and brute force are all different problem-solving patterns. Luckily for us, these patterns are tried and true, so we can compare our code against them for optimization. Learning about these concepts would be a good next step.
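As a taste of what's ahead, here's a minimal bubble sort sketch (a textbook version, not an optimized one); it's a classic instance of the O(n²) nested-loop pattern we just covered:

// Bubble sort: repeatedly swap adjacent out-of-order elements.
// Nested loops over the same array make this O(n²).
const bubbleSort = (arr) => {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length - i - 1; j++) {
      if (arr[j] > arr[j + 1]) {
        [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]]; // swap the pair
      }
    }
  }
  return arr;
};

bubbleSort([4, 2, 3, 1]); //=> [1, 2, 3, 4]

Notice the nested loops: just like our color-pairing example, the work grows quadratically with the length of the input.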


Written by Vannida Lim

//Flatiron School Alum && Software Engineer ⚛👩🏻‍💻