Big-O notation basics for web developers

What is the Big-O notation?

Big O notation is a mathematical notation that describes the limiting behavior of a function as its argument tends towards a particular value or infinity. It belongs to a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation.
In computer science, it is used to describe the performance or complexity of an algorithm. Big O specifically describes the worst-case scenario, and can be used to describe the execution time required or the space used (e.g. in memory or on disk) by an algorithm.

Let's break that down:

- how quickly the runtime grows
It's hard to pin down the exact runtime of an algorithm. It depends on the speed of the processor, what else the computer is running, etc. So instead of talking about the runtime directly, we use big O notation to talk about how quickly the runtime grows.
- relative to the input
If we were measuring our runtime directly, we could express our speed in seconds. Since we're measuring how quickly our runtime grows, we need to express our speed in terms of... something else. With Big O notation, we use the size of the input, which we call "n". So we can say things like the runtime grows "on the order of the size of the input" (O(n)) or "on the order of the square of the size of the input" (O(n^2)).
- as the input gets arbitrarily large
Our algorithm may have steps that seem expensive when n is small but are eclipsed eventually by other steps as n gets huge. For big O analysis, we care most about the stuff that grows fastest as the input grows, because everything else is quickly eclipsed as n gets very large. (If you know what an asymptote is, you might see why "big O analysis" is sometimes called "asymptotic analysis.")

What is Time complexity?

Time complexity is the computational complexity that describes the amount of time it takes to run an algorithm. It is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm differ by at most a constant factor.
Therefore, the time complexity is commonly expressed using big O notation, typically O(1), O(log n), O(n), O(n^2), O(2^n), etc., where n is the input size in units of bits needed to represent the input.
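As a rough illustration of counting elementary operations (the function below is a made-up example, not from any of the cited sources), an algorithm that does both a quadratic pass and a linear pass over its input performs n^2 + n operations; for large n the n^2 term dominates, so we simply call it O(n^2):

```javascript
// Counts the elementary operations of an algorithm that does
// a quadratic pass (all pairs) and then a linear pass.
const countOperations = items => {
     let ops = 0;
     // Quadratic part: n * n operations
     for (let i = 0; i < items.length; i++)
          for (let j = 0; j < items.length; j++)
               ops++;
     // Linear part: n operations, eclipsed as n grows
     for (let i = 0; i < items.length; i++)
          ops++;
     return ops; // n^2 + n, i.e. O(n^2)
};
```

For 10 items this returns 110 (100 + 10); for 100 items, 10,100 — the quadratic term quickly dwarfs the linear one.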

Big-O Complexity Chart (bigocheatsheet.com)

Constant-Time Algorithm: O(1) — “Order 1”

In this order, the time (number of iterations) stays constant regardless of the input size (number of items).
Example code:
const getLast = items =>
     items[items.length-1];

Logarithmic-Time Algorithm: O(log n) — “Order log N”

These are the holy grail of search/sort algorithms: they are usually the most efficient approach when dealing with large collections. Instead of looking through the items one by one, they split the data into chunks and discard a large amount on every iteration, usually half (log base 2). (Strictly speaking, the quicksort below averages O(n log n) overall; the O(log n) factor comes from the repeated halving. Binary search, covered later, is a true O(log n) algorithm.)
Example code:
const quickSort = list => {
     if (list.length < 2)
          return list;
     let pivot = list[0];
     let left  = [];
     let right = [];
     for (let i = 1, total = list.length; i < total; i++){
          if (list[i] < pivot)
               left.push(list[i]);
          else
               right.push(list[i]);
     }
     return [
          ...quickSort(left),
          pivot,
          ...quickSort(right)
     ];
};
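The "discard half on every iteration" idea can be sketched on its own (the helper name `halvings` is my own, for illustration): counting how many times a collection of size n can be halved before one item remains gives log base 2 of n, which is exactly why these algorithms scale so well.

```javascript
// Counts how many times a collection of size n can be halved
// before a single item remains — the log-base-2 growth that
// algorithms like binary search exhibit.
const halvings = n => {
     let count = 0;
     while (n > 1) {
          n = Math.floor(n / 2);
          count++;
     }
     return count;
};
```

For example, 8 items take 3 halvings, and 1,000,000 items take only 19.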

Linear-Time Algorithm: O(N) — “Order N”

In this order, the worst-case time (iterations) grows on par with the number of items: performance grows linearly, in direct proportion to the size of the input data set.
Example code:
const findIndex = (items, match) => {
     for (let i = 0, total = items.length; i < total; i++)
          if (items[i] === match)
               return i;
     return -1;
};

Quadratic-Time Algorithm: O(N^2) — “Order N squared”

For this kind of order, the worst-case time (iterations) is the square of the number of inputs. The time grows quadratically with the number of inputs: performance is directly proportional to the square of the size of the input data set. This is common with algorithms that involve nested iterations over the data set.
Example code:
const buildSquareMatrix = items => {
     let matrix = [];
     for (let i = 0, total = items.length; i < total; i++){
          matrix[i] = [];
          for (let j = 0, len = items.length; j < len; j++)
               matrix[i].push(items[j]);
     }
     return matrix;
};
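The matrix above holds n × n entries, so doubling the input quadruples the work. Any algorithm that compares every item against every other item has the same nested-loop shape and the same O(N^2) cost; a hypothetical duplicate check (not from the original sources) makes this concrete:

```javascript
// O(n^2): compares every item against every other item.
const hasDuplicates = items => {
     for (let i = 0; i < items.length; i++)
          for (let j = 0; j < items.length; j++)
               if (i !== j && items[i] === items[j])
                    return true;
     return false;
};
```

In the worst case (no duplicates at all) both loops run to completion, n × n comparisons in total.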

Exponential-Time Algorithm: O(2^N) — “Order 2 to the N”

For this kind of order, the work doubles with each addition to the input data set. The growth curve of an O(2^N) function is exponential: starting off very shallow, then rising meteorically.
Example code:
const fibonacci = number => {
     if (number <= 1) return number;
     return fibonacci(number - 2) + fibonacci(number - 1);
};
(wikipedia.org & rob-bell.net & interviewcake.com & bigocheatsheet.com & medium.com)

What is Linear search?

Linear search is the simplest and least performant searching algorithm we’ll cover. It simply loops over the array until it finds what you’re looking for; in the worst case the computer has to examine every single item in the array.
Example:
function linearSearch(array, toFind){
     for(let i = 0; i < array.length; i++){
          if(array[i] === toFind) return i;
     }
     return -1;
}
In the worst case scenario we have to look at every item in the array.
That means it would run in O(n) complexity.
(medium.com)

What is Binary search?

A binary search works by finding the middle element of a sorted array and comparing it to your target element. Based on what you find, you use either the left or the right half of the array, updating the start and end indexes until you either do or do not find the element. The data must be in sorted order for it to work.
Example:
function binarySearch(arr, target) {
     let left = 0;
     let right = arr.length - 1;
     while (left <= right) {
          const mid = left + Math.floor((right - left) / 2);
          if (arr[mid] === target) {
               return mid;
          }
          if (arr[mid] < target) {
               left = mid + 1;
          } else {
               right = mid - 1;
          }
     }
     return -1;
}
Binary search is more efficient than linear search.
That means it would run in O(log n) complexity.
(medium.com)

What does it mean?

It means that for a list of 100 items, linear search would take up to 100 comparisons whereas binary search will take only 7 (log2(100) ≈ 6.64, rounded up to 7).
For a list of 100,000,000 items, linear search will take up to 100,000,000 comparisons and binary search will take only 27.
(quora.com)
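These counts are easy to verify yourself with `Math.log2`, which is standard in modern JavaScript (the helper name `comparisons` is my own):

```javascript
// Worst-case comparison count for binary search over n sorted
// items: roughly log base 2 of n, rounded up.
const comparisons = n => Math.ceil(Math.log2(n));

console.log(comparisons(100));       // 7
console.log(comparisons(100000000)); // 27
```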

What is the Fibonacci sequence?

The Fibonacci sequence is a series of numbers where a number is found by adding up the two numbers before it. Starting with 0 and 1, the sequence goes 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 and so forth.
(livescience.com)

There are many ways to calculate the Fibonacci sequence. In my example I have used the "recursive" and the "for loop" way. If you apply Big-O analysis to the two functions, you will see that the "recursive" way runs 453 times to get the Fibonacci sequence up to 10, while the "for loop" way runs only 11 times.
If you compare this to the Big-O notation sheet, you will see that the "recursive" way is not efficient and the "for loop" is a much better solution to the problem.

Example Javascript code (recursive):

var myValue = 10;
var runTimes = 0;

function fibonacci(num) {
    runTimes += 1;

    if (num <= 1) return num;

    return fibonacci(num - 1) + fibonacci(num - 2);
}

for (var count = 0; count <= myValue; count++) {
    console.log("Fibonacci of " + count + " is " + fibonacci(count) + " (Called Function " + runTimes + " times to get answer)");
}
Output:
Fibonacci of 0 is 0 (Called Function 1 times to get answer)
Fibonacci of 1 is 1 (Called Function 2 times to get answer)
Fibonacci of 2 is 1 (Called Function 5 times to get answer)
Fibonacci of 3 is 2 (Called Function 10 times to get answer)
Fibonacci of 4 is 3 (Called Function 19 times to get answer)
Fibonacci of 5 is 5 (Called Function 34 times to get answer)
Fibonacci of 6 is 8 (Called Function 59 times to get answer)
Fibonacci of 7 is 13 (Called Function 100 times to get answer)
Fibonacci of 8 is 21 (Called Function 167 times to get answer)
Fibonacci of 9 is 34 (Called Function 276 times to get answer)
Fibonacci of 10 is 55 (Called Function 453 times to get answer)

Example Javascript code (for loop):

var myValue = 10;
var runTimes = 0;

function fibonacci(num) {
    runTimes += 1;
    if (num === 0) return 0;

    var myList = [1, 1];

    for (var i = 2; i < num; i++) {
        myList[i] = myList[i - 1]+ myList[i - 2];
    }
    //console.log(myList);
    return myList[num - 1];
}

for (var count = 0; count <= myValue; count++) {
    console.log("Fibonacci of " + count + " is " + fibonacci(count) + " (Called Function " + runTimes + " times to get answer)");
}
Output:
Fibonacci of 0 is 0 (Called Function 1 times to get answer)
Fibonacci of 1 is 1 (Called Function 2 times to get answer)
Fibonacci of 2 is 1 (Called Function 3 times to get answer)
Fibonacci of 3 is 2 (Called Function 4 times to get answer)
Fibonacci of 4 is 3 (Called Function 5 times to get answer)
Fibonacci of 5 is 5 (Called Function 6 times to get answer)
Fibonacci of 6 is 8 (Called Function 7 times to get answer)
Fibonacci of 7 is 13 (Called Function 8 times to get answer)
Fibonacci of 8 is 21 (Called Function 9 times to get answer)
Fibonacci of 9 is 34 (Called Function 10 times to get answer)
Fibonacci of 10 is 55 (Called Function 11 times to get answer)
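A middle ground worth noting (not part of the examples above) is memoization: keep the recursive shape but cache already-computed results, so each value is computed only once. This brings the call count from O(2^n) down to O(n). A minimal sketch, assuming a plain object as the cache:

```javascript
var memo = {};

function fibonacciMemo(num) {
    if (num <= 1) return num;
    // Return the cached result if we have computed this value before
    if (memo[num] !== undefined) return memo[num];
    memo[num] = fibonacciMemo(num - 1) + fibonacciMemo(num - 2);
    return memo[num];
}
```

With the cache in place, fibonacciMemo(10) still returns 55, but without the exponential blow-up in calls.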
