How efficient and scalable is your program, really?
Big O is a really important topic to understand. You can hardly find a technical interview that doesn't touch on it.
And it's a concept that will stay relevant throughout a developer's career. Whether you are a senior developer who has been coding for a long time or a beginner just getting started, this is an indispensable concept to know.
Not only does it help in interviews, but it also plays a vital role in how you write programs.
So let me start by sharing a small experience which led me to learn Data Structures & Algorithms in a practical way.
A few days back, I participated in my first ever Google Code Jam: an online coding competition where you are given a set of problems to solve. I had just qualified for the First Round. There were 3 problems to solve within 2.5 hours. I was sure that I wouldn't be able to solve all of them, but I at least wanted to solve one. So I tried one of them, but most of my attempts resulted in a Time Limit Exceeded error (there were some wrong answers too...).
And this was because my solution wasn't written in an algorithmically efficient way.
What caused that TLE Error?
Well, the answer is simple. It was because of an imperfect Big O. I had used so many nested loops that the running time grew steeply as the data set became larger and larger. And this is where I realized the importance of Data Structures & Algorithms. If I had followed the right approach, I could have saved most of that time, and my program would have passed. Hence I started with learning Big O, the first step in mastering data structures & algorithms.
By the way, in the end, I didn't even complete one of them in Round 1. LOL :)
Enough of my stuff. Let's begin. But before we start, I want to ask you a question.
How will you define a Good Code?
Say there are 2 friends, Tim & Tom, and both of them have a solution to a problem. In this case, how can we know whose code is better? The answer is, there are 2 things that make code good, better, and best: readability and scalability.
In this article, we will discuss scalability, since Big O falls into this category. So how do we measure scalability? To understand that better, let's say we have a function which finds earth in an array of planets.
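A minimal sketch of such a function might look like this (the planet list and the `findEarth` name are my own illustration, not a fixed API):

```javascript
// A hypothetical list of planets — 'earth' is the element we want to find.
const planets = ['mercury', 'venus', 'earth', 'mars', 'jupiter', 'saturn', 'uranus', 'neptune'];

// Walk through the array one element at a time until we find 'earth'.
function findEarth(array) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === 'earth') {
      return `Found earth at index ${i}`;
    }
  }
  return 'earth not found';
}

console.log(findEarth(planets)); // → "Found earth at index 2"
```

Notice that the function has to check each element one by one, so the work it does grows with the size of the array.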
For the above function, let's measure its scalability in terms of time complexity by checking how long it takes to run.
Note that scalability is measured in terms of both Time Complexity & Space Complexity. But for now, let’s stick with time complexity alone.
But what if there are more than 10,000 elements in the array? The time would increase, but not that significantly, right? Computers are very fast today. Still, there may be a slight difference in the time every time you run the program.
And that depends entirely on how fast your computer's CPU is, what programs are already running, etc.
So if we were to apply this methodology to Tim and Tom's code, can we say that Tim's code (4 milliseconds) is better than Tom's (6 milliseconds) since it took 2 milliseconds less to run? No, that's not a correct way to judge. As said earlier, the time may vary depending on external factors like what CPU you use, what other programs are running simultaneously, etc.
Okay, then how do we measure it?
The answer is Big O.
So what is Big O?
Big O is a way to measure how efficient your program is. As said earlier, anyone can write a solution to a problem given sufficient time, like Tim and Tom. So the real question is how well our program performs at runtime. Does it slow down dramatically as the input grows? And this is what most of the top tech giants like Google, Amazon, etc. look for.
So Big O is the language we use to describe how the running time of a function/algorithm grows as its input grows.
We can use Big O to compare two different algorithms, like Tim and Tom's code, and say which one is better regardless of their computers' performance. And an algorithm can have one of several different Big O's, such as O(1), O(log n), O(n), O(n log n), O(n²), O(2^n), and O(n!).
This may seem complex but it becomes easier to understand as we go further.
From the chart above, you can see that in some cases [O(n!), O(2^n), O(n²)] the number of operations increases significantly as the number of elements grows, while in other cases [O(log n), O(1), etc.] the number of operations remains almost the same.
And this is how we define good code in terms of time complexity using Big O. We definitely don't want our algorithm/function to stay in the red area, as you can see it's horrible.
Whenever we define scalability in terms of time complexity using Big O, we simply mean,
When the input grows bigger and bigger, how much does our algorithm slow down? The less it slows down, the better it is, like the green & yellow regions.
So, from now on, instead of using performance.now() to calculate the time, we will use Big O to measure the number of steps an algorithm needs as the input increases.
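To make "counting steps" concrete, here is a small illustrative sketch (the helper names are my own) that counts the operations performed by a single loop versus nested loops. The counts are the same on every machine, which is exactly why Big O is a fairer measure than wall-clock time:

```javascript
// O(n): a single loop does n steps for an input of size n.
function countLinear(n) {
  let steps = 0;
  for (let i = 0; i < n; i++) {
    steps++;
  }
  return steps;
}

// O(n²): nested loops do n * n steps for an input of size n.
function countQuadratic(n) {
  let steps = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      steps++;
    }
  }
  return steps;
}

console.log(countLinear(10));      // 10
console.log(countQuadratic(10));   // 100
console.log(countLinear(1000));    // 1000
console.log(countQuadratic(1000)); // 1000000
```

Growing the input 100× grows the linear version's work 100×, but the nested version's work 10,000× — which is the kind of blow-up that causes a Time Limit Exceeded error on large inputs.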
And we will learn how to calculate Big O, and what O(n), O(1), etc. mean, in the next article, as this one is already pretty long. I don't wanna stuff too much into a single article. So let's meet in the next one :)
By the way, all this knowledge came from Andrei, a very good instructor to me and many other students. Any claps here are dedicated to him :)