Big O notation is one of the most important topics a developer needs to understand in order to become a good programmer. Imagine that you and a developer friend each design an algorithm to solve the same problem. The two solutions don't just differ in variable names; they take significantly different approaches to the problem.

So how would you determine which approach is better? That's what Big O is all about: it is a system for comparing and measuring the performance of algorithms.


## Asymptotic Notations

A group of 4 students was given a string and asked to reverse it. Everyone came up with a different solution:

Student 1

```
function reverse(s) {
  var o = '';
  for (var i = s.length - 1; i >= 0; i--) o += s[i];
  return o;
}
```

Student 2

```
function reverse(s) {
  var o = [];
  for (var i = s.length - 1, j = 0; i >= 0; i--, j++) o[j] = s[i];
  return o.join('');
}
```

Student 3

```
function reverse(s) {
  var o = [];
  for (var i = 0, len = s.length; i < len; i++) o.push(s.charAt(len - 1 - i));
  return o.join('');
}
```

Student 4

```
function reverse(s) {
  return s.split('').reverse().join('');
}
```

Each piece of code solves the same problem with a totally different approach. So wouldn't it be nice if we had a consistent system to compare them and classify them by performance?

Asymptotic notations are mathematical tools used to analyze the running time of an algorithm for a given input size in a particular case.


### Why are asymptotic notations important?

The efficiency of an algorithm is measured by the time, storage, and other resources it needs to run. An algorithm does not perform the same for every input: as the input grows, the performance changes with it. Asymptotic notations give us a way to categorize algorithms by how they scale.

The study of how an algorithm's performance changes as the input size changes is called **Asymptotic Analysis**. It helps us understand why our code slows down or crashes, identify the faulty areas, and rethink our approach.

### Types of Asymptotic Notations

There are three main types of asymptotic notations.

#### Big-O Notation (O-notation)

Big-O notation is used to measure the worst-case running time of an algorithm. It describes the asymptotic upper bound, and mathematically it is represented as,

```
O(g(n)) = { f(n): there exist positive constants c and n0
such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }
```

A function f(n) is said to be O(g(n)) if there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0. In general, f can be any function, but when we analyze an algorithm, f(n) represents its running time, where n is the input size.
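The definition above can be checked numerically. The sketch below verifies that f(n) = 2n + 3 is O(n) by testing the inequality for one valid (not unique) choice of constants, c = 5 and n0 = 1:

```javascript
// Sketch: checking the Big-O definition for f(n) = 2n + 3 and g(n) = n.
// The constants c = 5 and n0 = 1 are one valid choice, picked for illustration.
const f = (n) => 2 * n + 3;
const g = (n) => n;
const c = 5;
const n0 = 1;

let holds = true;
for (let n = n0; n <= 1000; n++) {
  if (!(0 <= f(n) && f(n) <= c * g(n))) holds = false;
}
console.log(holds); // true: 2n + 3 ≤ 5n for all n ≥ 1
```

Note that the inequality fails below n0 (at n = 0, f(0) = 3 > 0 = 5·g(0)), which is exactly why the definition only demands it for n ≥ n0.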

#### Omega Notation (Ω-notation)

Omega notation is used to measure the best-case running time of an algorithm. It describes the asymptotic lower bound, and mathematically it is represented as,

```
Ω(g(n)) = { f(n): there exist positive constants c and n0
such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }
```

A function f(n) is said to be Ω(g(n)) if there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0.

#### Theta Notation (Θ-notation)

Theta notation is commonly associated with the average-case running time of an algorithm. Strictly speaking, it describes a tight bound, an asymptotic upper and lower bound at the same time, and mathematically it is represented as,

```
Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0
such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0 }
```

A function f(n) is said to be Θ(g(n)) if there exist positive constants c1, c2, and n0 such that f(n) can be sandwiched between c1·g(n) and c2·g(n) for all n ≥ n0.


### Difference between Big-O, Omega and Theta Notations

| Big-O | Omega | Theta |
|---|---|---|
| It describes the upper bound of the running time of an algorithm. | It describes the lower bound of the running time of an algorithm. | It describes the tight (most accurate) upper and lower bound of the running time of an algorithm. |
| It measures the longest possible time for an algorithm to complete for a given input size. | It measures the smallest possible time for an algorithm to complete for a given input size. | It gives the tightest estimate of running time, compared to O and Ω, for a given input size. |
| It is represented by O-notation. | It is represented by Ω-notation. | It is represented by Θ-notation. |
| Worst case | Best case | Average case |
| 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 | 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 | 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 |

### Which asymptotic notation is used most?

There is no single answer to this; it depends on the case and the use. Generally, Theta notation gives the most accurate picture of the running time of an algorithm, and in interviews, when the interviewer asks for the 'order of' an algorithm, they usually expect an answer in terms of Theta notation.

Many people also argue that Big-O notation is one of the best choices for describing the running time of an algorithm: since it describes the upper bound, Big-O gives the worst case, and no complexity can be worse than that.

## Time Complexity

Suppose we want to write a function that adds all numbers from 1 to n, where n is a number given by the user, and we write two algorithms to solve the problem.

```
function addNumbers(n) {
  let sum = 0;
  for (let i = 1; i <= n; i++) {
    sum = sum + i;
  }
  return sum;
}

let result = addNumbers(55);
console.log(result);
```

and, second one

```
function addNumbers(n) {
  return n * (n + 1) / 2;
}

let result = addNumbers(55);
console.log(result);
```

So when we pass a number such as 55, we get 1540 from both algorithms with almost no difference in running time. But if we pass 100000000, we can see a significant difference between the running times of the two algorithms.
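One way to observe that gap is to time both approaches with Node's built-in `console.time`. This is a rough sketch; the exact milliseconds depend entirely on your machine, but the relative difference is what matters:

```javascript
// Sketch: timing the loop version vs. the formula version.
function addNumbersLoop(n) {
  let sum = 0;
  for (let i = 1; i <= n; i++) sum += i;
  return sum;
}

function addNumbersFormula(n) {
  return n * (n + 1) / 2;
}

const n = 100000000;

console.time('loop');    // grows linearly with n
const a = addNumbersLoop(n);
console.timeEnd('loop');

console.time('formula'); // constant, regardless of n
const b = addNumbersFormula(n);
console.timeEnd('formula');

console.log(a === b); // true: both compute the same sum
```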

Time complexity depends on a lot of factors like execution time, hardware, network, and power. However, while analyzing an algorithm we consider none of them except execution time. Execution time refers to the total time the algorithm takes to perform its operations, and it usually depends on the size of the input. But if an algorithm takes time T1 to execute, that does not mean T1 is its time complexity, because the time changes to T2 as soon as the input size varies.

Time complexity actually tells you how the execution time of an algorithm varies (increases or decreases) as the number of steps or operations changes with the input size.

**Note** – A fast algorithm is one that performs few operations to execute its code; as the number of operations grows toward infinity, the algorithm gets slower.


### Which asymptotic notation is used to define time complexity?

For analyzing the time complexity of an algorithm, we mostly use Big-O notation because it gives an upper bound on the execution time. In simple words, Big-O gives the worst-case scenario of the time complexity of a given algorithm.

To compute the Big-O of an algorithm, we ignore lower-order terms, as they are insignificant for large input sizes, and we assume every statement takes 1 unit of time. Let us walk through a few examples.

#### Example 1

```
const square = (a) => {
  return a * a;
}
```

Let's see how many times the operations are executed:

**step 1** – It runs only one time, independent of the input size, as only one operation is executed

**step 2** – f(n) = 1

**step 3** – O(f(n)) = O(1), said to have constant time

#### Example 2

```
const sum = (arr1, n) => {
  let totalSum = 0;
  for (let i = 0; i < n; i++) {
    totalSum = totalSum + arr1[i];
  }
  return totalSum;
}
```

Let's see how many times the operations are executed:

**step 1** – Initialization of the variable totalSum takes 1 unit of time

**step 2** – Then we enter the loop; when i = 0, the condition is satisfied and totalSum is updated

**step 3** – when i = 1, the condition is satisfied and totalSum is updated

**step 4** – when i = 2, the condition is satisfied and totalSum is updated

**step 5** – The loop runs until the condition becomes false, so the totalSum statement runs for i = 0, 1, 2, …, n − 1, i.e. n times, while the loop condition itself is checked n + 1 times

**step 6** – the return of totalSum takes 1 unit of time

**step 7** – f(n) = 1 + (n + 1) + n + 1 = 2n + 3

**step 8** – O(f(n)) = O(n); we ignore the constants and lower-order terms
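The linear growth can also be observed directly by instrumenting the loop with a counter. In this sketch, `ops` is a variable added purely for illustration; it counts how often the loop body runs:

```javascript
// Sketch: counting loop-body executions to see linear growth.
function sumWithCount(arr1, n) {
  let ops = 0;       // illustration-only counter, not part of the original code
  let totalSum = 0;
  for (let i = 0; i < n; i++) {
    totalSum = totalSum + arr1[i];
    ops++;
  }
  return { totalSum, ops };
}

// Double the input size and the body runs exactly twice as often:
const small = sumWithCount(new Array(100).fill(1), 100);
const large = sumWithCount(new Array(200).fill(1), 200);
console.log(small.ops, large.ops); // 100 200
```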

#### Example 3

```
const sum = (arr1, arr2, n) => {
  let totalSum = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      totalSum = totalSum + arr1[i] + arr2[j];
    }
  }
  return totalSum;
}
```

Let's see how many times the operations are executed:

**step 1** – Initialization of the variable totalSum takes 1 unit of time

**step 2** – Then we enter the outer loop; when i = 0, the condition is satisfied, the inner loop starts, and when j = 0, its condition is satisfied and totalSum is updated

**step 3** – when i = 1, the condition is satisfied and the inner loop runs again until its condition becomes false; totalSum is updated once per inner iteration

**step 4** – Likewise, the outer loop condition is checked n + 1 times and its body runs n times; the inner loop condition is checked n(n + 1) times and the totalSum statement runs n·n times

**step 5** – the return of totalSum takes 1 unit of time

**step 6** – f(n) = 1 + (n + 1) + n(n + 1) + n·n + 1 = 2n^{2} + 2n + 3

**step 7** – O(f(n)) = O(n^{2}); we ignore the constants and lower-order terms
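The same counting trick shows the quadratic growth of the nested loop. Again, `ops` is an illustration-only counter:

```javascript
// Sketch: counting body executions of the nested loop — 'ops' grows as n².
function sumPairsWithCount(arr1, arr2, n) {
  let ops = 0;       // illustration-only counter
  let totalSum = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      totalSum = totalSum + arr1[i] + arr2[j];
      ops++;
    }
  }
  return { totalSum, ops };
}

const a = new Array(50).fill(1);
const r = sumPairsWithCount(a, a, 50);
console.log(r.ops); // 2500 = 50 * 50
```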

#### Example 4

```
const sum = (n) => {
  let totalSum = 0;
  for (let i = 1; i < n; i = i * 2) {
    totalSum = totalSum + i;
  }
  return totalSum;
}
```

Let's see how many times the operations are executed:

**step 1** – Initialization of the variable totalSum takes 1 unit of time

**step 2** – Then we enter the loop; when i = 1, the condition is satisfied and totalSum is updated

**step 3** – when i = 2, the condition is satisfied and totalSum is updated

**step 4** – when i = 4, the condition is satisfied and totalSum is updated

**step 5** – when i = 8, the condition is satisfied and totalSum is updated

**step 6** – The loop runs until the condition becomes false, so across iterations i takes the values 1, 2, 2^{2}, 2^{3}, 2^{4}, …, 2^{k}; the loop stops once 2^{k} ≥ n

**step 7** – the return of totalSum takes 1 unit of time

**step 8** – Assume the loop runs until 2^{k} = n. Then 2^{k} = n gives k = log_{2}n

**step 9** – f(n) = 1 + log_{2}n + 1 = log_{2}n + 2

**step 10** – O(f(n)) = O(log_{2}n); we ignore the constants and lower-order terms
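Counting the iterations of the doubling loop confirms the logarithmic growth. This sketch returns the iteration count instead of the sum, purely for illustration:

```javascript
// Sketch: the doubling loop runs roughly log2(n) times.
function countDoublingIterations(n) {
  let iterations = 0;
  for (let i = 1; i < n; i = i * 2) {
    iterations++;    // i takes the values 1, 2, 4, 8, ...
  }
  return iterations;
}

console.log(countDoublingIterations(1024)); // 10, since 2^10 = 1024
console.log(Math.log2(1024));               // 10
```

Doubling n again only adds one more iteration, which is exactly what O(log n) behavior looks like in practice.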

### Different types of time complexities used

The table below is a Big-O cheat sheet to help you analyze your algorithm and understand how its performance, and the time it takes to execute, will change with the input size.

| Big-O notation | Name | Example | Fun way to remember |
|---|---|---|---|
| O(1) | Constant | checking whether a number is odd or even | O(Yeah) 🤩 |
| O(log n) | Logarithmic | finding an element using binary search | O(nice) 😋 |
| O(n) | Linear | finding an element using linear search | O(just fine) 😀 |
| O(n log n) | Linearithmic | sorting elements using merge sort | O(ok) 🙂 |
| O(n^{2}) | Quadratic | sorting elements using bubble sort | O(no) 🙁 |
| O(2^{n}) | Exponential | recursive calculation of Fibonacci numbers | O(Omg!) 😭 |
| O(n!) | Factorial | traveling salesman problem through brute-force search | O(fuck) 🤯 |

Here is a graph of different time complexity functions,
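As a concrete instance of the O(log n) row in the table above, here is a classic iterative binary search sketch. It halves the search range on every step, so a sorted array of a million elements needs at most about 20 comparisons:

```javascript
// Sketch: iterative binary search on a sorted array — O(log n).
// Returns the index of target, or -1 if it is not present.
function binarySearch(arr, target) {
  let low = 0;
  let high = arr.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) low = mid + 1;  // discard the lower half
    else high = mid - 1;                   // discard the upper half
  }
  return -1;
}

console.log(binarySearch([1, 3, 5, 7, 9, 11], 7)); // 3
console.log(binarySearch([1, 3, 5, 7, 9, 11], 4)); // -1
```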

## Space Complexity

Space complexity is the total storage an algorithm needs to run, including the storage taken by the input data. It is important to understand that space complexity covers both the auxiliary space and the space used by the input. Many people confuse space complexity with auxiliary space, which is only the temporary (extra) space an algorithm uses.

Space complexity also helps determine the efficiency of an algorithm. If your algorithm has a bad time complexity, it will merely take longer to run, but if it takes more space than the available limit, the program will not run at all.

To compute space complexity, we add up the space taken by all the variables initialized in the program plus the input data size.

### Example 1

```
const sum = () => {
  let a = 3;
  let b = 7;
  let c = a + b;
  return c;
}
```

Let's assume each variable takes 1 unit of space.

**step 1** – list all the variables in the program: a, b, c

**step 2** – add them up: S(n) = 1 + 1 + 1 = 3, and there is no input data

**step 3** – O(S(n)) = O(1); the space complexity of the above algorithm is constant

### Example 2

```
const sum = (arr1, n) => {
  let totalSum = 0;
  for (let i = 0; i < n; i++) {
    totalSum = totalSum + arr1[i];
  }
  return totalSum;
}
```

Let's assume each variable takes 1 unit of space.

**step 1** – list all the variables in the program: arr1, n, totalSum, and i

**step 2** – add them up: S(n) = n + 1 + 1 + 1 = n + 3

**step 3** – O(S(n)) = O(n); the space complexity of the above algorithm is linear

**Note** – The best algorithms have the least space complexity: the less memory an algorithm takes, the more efficient it gets.
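The distinction between total and auxiliary space can be made concrete with two hypothetical functions (the names are made up for this sketch). Both receive an O(n) input, so the total space is O(n) for both, but their auxiliary (extra) space differs:

```javascript
// O(n) auxiliary space: builds a second array of doubled values.
function doubleAllCopy(arr) {
  const out = [];                     // extra array grows with the input
  for (let i = 0; i < arr.length; i++) out.push(arr[i] * 2);
  return out;
}

// O(1) auxiliary space: only a few scalar variables, whatever the input size.
function sumDoubled(arr) {
  let total = 0;                      // a single accumulator
  for (let i = 0; i < arr.length; i++) total += arr[i] * 2;
  return total;
}

console.log(doubleAllCopy([1, 2, 3])); // [ 2, 4, 6 ]
console.log(sumDoubled([1, 2, 3]));    // 12
```

When memory is tight, preferring the accumulator style over building intermediate arrays is often the easiest space optimization.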


## Final Words

I hope you liked this article. If this blog helped you, please share it with your friends and family. I am creating a blog series on “Understanding Data Structures and Algorithms with JavaScript 🚀”, so bookmark this website to get more awesome articles like this.