# Big O Analysis and Programming

I remember interviewing with Facebook for a developer role a couple of years ago and having to design a simple algorithm. It was going quite well until my interviewer asked me what the complexity of my algorithm was, and I started to mumble incoherent words.

As a computer scientist and mathematician I was obviously familiar with the concept, but I had never really understood it. I did try to study the subject using online resources, but they either bored me or confused me even more.

So, using examples in Java, I decided to have a go at writing a tutorial on Big O Analysis and Programming, with the hope that I won't end up writing one that confuses you even more.

**The what:** The Big O notation is used in Computer Science to describe the performance or complexity of an algorithm. It does this by describing the algorithm's worst-case scenario.

**The why:** The Big O notation is important because, by showing you the worst-case complexity of your code, it tells you how your code will scale with larger inputs.

**In English:** The Big O Notation helps you understand how long an algorithm or piece of code takes to complete. A faster algorithm is better than a slower algorithm.

There are 6 common Big O complexities or performance rates. Listed from fastest to slowest, they are:

1. O(1) – Constant Complexity

2. O(log n) – Logarithmic Complexity

3. O(n) – Linear Complexity

4. O(n log n) – Logarithmic + Linear Complexity

5. O(n^{2}) – Quadratic Complexity

6. O(2^{n}) – Exponential Complexity

A seventh complexity we will not look into is O(n!), factorial complexity. Algorithms in this class become impractically slow even for small inputs; a classic example is solving the travelling salesman problem by brute force.

Before we look at these complexities in detail, it is important to understand exactly how to calculate the complexity of a piece of code.

### HOW TO CALCULATE COMPLEXITY OF CODE

1. Find the input or size of input

2. List the number of operations

3. Find how many times each operation will be executed for each input

4. Get the largest time from the previous step and simplify

5. Drop constants and non significant terms

Now let's have a look at these complexities in detail.

## O(1) – Constant Complexity

Constant complexity means that an algorithm will take the same amount of time no matter how big the size of data input is. If for an input of size 1 the time to complete the algorithm is 5 seconds, then for input of size 1000, it will still be 5 seconds.

**FOR EXAMPLE**

1. Finding out if a number is even or odd

2. Accessing an array

3. Appending an item to the end of an array

**JAVA EXAMPLE**
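As a sketch, the single `i == 0` comparison used in the calculation below could come from a method like this (the class and method names are my own guesses):

```java
public class ConstantExample {
    // One comparison runs no matter what value i holds, so the
    // running time does not grow with the input: O(1).
    public static boolean isZero(int i) {
        return i == 0;
    }

    public static void main(String[] args) {
        System.out.println(isZero(0)); // true
        System.out.println(isZero(7)); // false
    }
}
```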

**CALCULATING COMPLEXITY OF THIS EXAMPLE**

1. Find the input or size of input:

`int i`

2. List the number of operations:

`i==0`

3. Find how many times each operation will be executed for each input:

`i==0 is executed a total of 1 time.`

4. Get the largest time from the previous step and simplify:

`Still 1`

5. Drop constants and non significant terms:

`Still 1`

Hence, complexity is **O(1).**

## O(log n) – Logarithmic Complexity

Logarithmic complexity means that the running time of an algorithm grows with the size of the input, but not proportionally. If an input of size 1 takes 1 second, an input of size 10 may take 2 seconds, an input of size 100 may take 3 seconds, and so on.

**Note:** Basic log rules state that if b^{y} = x, then log_{b}(x) = y and x/(b^{y}) = 1. For example, since 2^{4} = 16, we have log_{2}(16) = 4 and 16/(2^{4}) = 1.

**FOR EXAMPLE**

1. Insertion to a binary search tree

2. Finding an item in a binary search tree

3. Binary search in a sorted array

**JAVA EXAMPLE**
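As a sketch, the operations listed in the calculation below could come from a recursive method that halves `n` on every call, something like this (the class name is my own):

```java
public class LogExample {
    // Each call halves n, so reaching the n == 0 base case takes
    // about log2(n) calls: O(log n).
    public static int isZero(int n) {
        if (n == 0) {
            return 1;
        }
        return 1 + isZero(n / 2);
    }

    public static void main(String[] args) {
        System.out.println(isZero(16)); // 6: the calls see 16, 8, 4, 2, 1, 0
    }
}
```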

**CALCULATING COMPLEXITY OF THIS EXAMPLE**

1. Find the input or size of input:

`int n`

2. List the number of operations:

```
n == 0
return 1
return 1 + isZero(n/2)
```

3. Find how many times each operation will be executed for each input:

```
n == 0 : about log_{2}(n) + 1 times (each call halves n)
return 1 : 1 time
return 1 + isZero(n/2) : about log_{2}(n) times
```

4. Get the largest time from the previous step and simplify:

```
1 + log_{2}(n)
Each recursive call halves n, so by the logarithm rules stated
earlier it takes about log_{2}(n) halvings to bring n down to 0.
```

5. Drop constants and non significant terms:

`1 + log_{2}(n) becomes log_{2}(n)`

Hence, complexity is **O(log n).**

## O(n) – Linear Complexity

Linear complexity means that the time to complete an algorithm scales linearly with the size of the input. For example, if an input of size 10 takes 1 second, then doubling the input size to 20 should take about 2 seconds.

**FOR EXAMPLE**

1. Deleting an element from an array

2. Counting the items in a linked list

3. Reading a book

**JAVA EXAMPLE**
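As a sketch, the operations in the calculation below could come from a loop that scans an array once and prints the index of every zero element, something like this (the class and method names are mine; note that in Java an array's length is the field `a.length`, not a method call):

```java
public class LinearExample {
    // One pass over the array: the loop body runs a.length times,
    // so the running time grows linearly with the input size: O(n).
    public static void printZeroIndices(int[] a) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == 0) {
                System.out.println(i);
            }
        }
    }

    public static void main(String[] args) {
        printZeroIndices(new int[] {5, 0, 7, 0}); // prints 1 and 3
    }
}
```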

**CALCULATING COMPLEXITY OF THIS EXAMPLE**

1. Find the input or size of input:

`the size of the array a, i.e. a.length (in Java, an array's length is a field, not a method call).`

2. List the number of operations:

```
i=0
i < a.length
i++
a[i]==0
System.out.println(i)
```

3. Find how many times each operation will be executed for each input:

```
i=0 : 1 time
i < a.length : a.length + 1 times
i++ : a.length times
a[i]==0 : a.length times
System.out.println(i) : a.length times (worst case, when every element is 0)
```

4. Get the largest time from the previous step and simplify:

```
a.length + 1
Let a.length = N
= N + 1
```

5. Drop constants and non significant terms:

`N + 1 becomes N`

Hence, complexity is **O(N).**

## O(n log n) - Logarithmic + Linear Complexity

This complexity means the time to complete grows in proportion to the number of items n multiplied by log_{2}(n). For example, if 10 items take about 33 units of time (10 × log_{2}(10) ≈ 33), then 100 items take about 664 units of time (100 × log_{2}(100) ≈ 664).

**FOR EXAMPLE**

1. Sorting a deck of cards with mergesort

2. Sorting an array with timsort

3. Sorting an array with heapsort

**JAVA EXAMPLE**
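As a sketch, the operations in the calculation below could come from a pair of nested loops where the outer loop runs 10 times and the inner loop repeatedly halves `j`, something like this (the class and method names are mine):

```java
public class NLogNExample {
    // The outer loop runs n times; the inner loop halves j, so it
    // runs about log2(i) times per outer iteration: O(n log n) overall.
    public static void printHalvings(int n) {
        for (int i = 0; i < n; i++) {
            for (int j = i; j > 0; j /= 2) {
                System.out.print(j + " ");
            }
            System.out.println();
        }
    }

    public static void main(String[] args) {
        printHalvings(10);
    }
}
```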

**CALCULATING COMPLEXITY OF THIS EXAMPLE**

1. Find the input or size of input:

`10`

2. List the number of operations:

```
int i = 0
i < 10
i++
int j = i
j > 0
j/=2
System.out.print(j+" ")
System.out.println()
```

3. Find how many times each operation will be executed for each input:

```
int i = 0 : 1 time
i < 10 : 10 + 1 times
i++ : 10 times
int j = i : 10 times
j > 0 : about (log_{2}(10) + 1) * 10 times (worst case)
j/=2 : about log_{2}(10) * 10 times (worst case)
System.out.print(j+" ") : about log_{2}(10) * 10 times
System.out.println() : 10 times
```

4. Get the largest time from the previous step and simplify:

```
log_{2}(10) * 10
Let the input 10 be N
= N log_{2}(N)
```

5. Drop constants and non significant terms:

`Still N log_{2}(N)`

Hence, complexity is **O(n log n).**

## O(n^{2}) - Quadratic Complexity

Quadratic complexity means that the time to complete an algorithm is directly proportional to the square of the size of the input data set. For example if your input size is 1 and this takes 1 second, then if the size becomes 10, it will take 100 seconds and input of size 100 takes 10000 seconds!

**FOR EXAMPLE**

1. Sorting an array with insertion sort

2. Sorting an array with selection sort

3. Sorting an array with bubblesort

**JAVA EXAMPLE**
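As a sketch, the operations in the calculation below could come from a method that combines an O(n) loop with a recursion that shrinks `n` by 5 on every call, something like this (the class name is mine; `aFunction` comes from the calculation):

```java
public class QuadraticExample {
    // The loop body runs about n/2 times per call, and the recursion
    // makes about n/5 calls before n reaches 0, so the total work is
    // roughly (n/2) * (n/5): O(n^2).
    public static int aFunction(int n) {
        int count = 0;
        for (int i = 0; i < n; i += 2) {
            count++;
        }
        if (n <= 0) {
            return count;
        }
        return 1 + aFunction(n - 5);
    }

    public static void main(String[] args) {
        System.out.println(aFunction(10)); // 2: two recursive steps (10 -> 5 -> 0)
    }
}
```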

**CALCULATING COMPLEXITY OF THIS EXAMPLE**

1. Find the input or size of input:

`int n`

2. List the number of operations:

```
int count=0
int i=0
i < n
i+=2
count++
n <= 0
return count;
return 1 + aFunction(n-5)
```

3. Find how many times each operation will be executed for each input:

```
int count=0 : 1 + n/5 times (once per recursive call)
int i=0 : 1 + n/5 times
i < n : about (n/2 + 1) * n/5 times
i+=2 : about n/2 * n/5 times
count++ : about n/2 * n/5 times
n <= 0 : 1 + n/5 times
return count : 1 time
return 1 + aFunction(n-5) : about n/5 times
```

4. Get the largest time from the previous step and simplify:

```
about n/2 * n/5
= n^{2}/10
```

5. Drop constants and non significant terms:

`n^{2}/10 becomes n^{2}`

Hence, complexity is **O(n^{2}).**

## O(2^{n}) - Exponential Complexity

Exponential complexity means that the time to complete an algorithm doubles with each addition to the input data set. For example if your input size is 1 and this takes 1 second, then if the size becomes 10, it will take 1024 seconds and input of size 20 takes 1048576 seconds!

**FOR EXAMPLE**

1. Breaking a password with brute force attacks

2. Solving the naive n-queens problem

3. Solving the Towers of Hanoi puzzle

**JAVA EXAMPLE**
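As a sketch, the operations in the calculation below match the classic doubly recursive Fibonacci (only the class name is my own):

```java
public class ExponentialExample {
    // Every call above the base case spawns two more calls, so the
    // number of calls roughly doubles with each increment of n: O(2^n).
    public static int Fibonacci(int n) {
        if (n <= 1) {
            return n;
        }
        return Fibonacci(n - 2) + Fibonacci(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(Fibonacci(10)); // 55
    }
}
```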

**CALCULATING COMPLEXITY OF THIS EXAMPLE**

1. Find the input or size of input:

`int n`

2. List the number of operations:

```
n <= 1
return n
return Fibonacci(n - 2) + Fibonacci(n - 1)
```

3. Find how many times each operation will be executed for each input:

```
n <= 1 : about 2^{(n-2)} + 2^{(n-1)} + 1 times
return n : once per base case, about 2^{(n-1)} times
return Fibonacci(n - 2) + Fibonacci(n - 1) : about 2^{(n-2)} + 2^{(n-1)} + 1 times
```

4. Get the largest time from the previous step and simplify:

`2^{(n-2)} + 2^{(n-1)} + 1`

5. Drop constants and non significant terms:

`2^{(n-2)} + 2^{(n-1)} + 1 becomes 2^{n}`

Hence, complexity is **O(2^{n}).**

During programming interviews, candidates are often asked to write an algorithm and find its complexity. Follow the 5 steps outlined in order to figure out how fast your code is.

You should try to make a habit of thinking about the time and space complexity of your code without compromising its readability. With time this will become second nature, allowing you to spot optimizations and potential performance issues right away.

What do you think about this tutorial on Big O? Is there something I missed? Or would you explain it differently? Leave your comments below!

Nice tutorial, what about an example on O(e^n)?

Thanks! I never thought of that one. O(e^n) complexity would be extremely slow in its worst case wouldn’t it? I will think of some examples and edit the post.