DISCUSS HOW ASYMPTOTIC ANALYSIS CAN BE USED TO ASSESS THE EFFECTIVENESS OF AN ALGORITHM


Asymptotic analysis is a method of estimating an algorithm's time complexity as a function of its input size in order to understand the program's limits, widely known as its "run-time performance." The intention is to determine the best-case, worst-case, and average-case times for completing a specific task.

2. Asymptotic Notations And How They Relate To Ideas Of Best, Average And Worst Case:

 Big-O notation: Big-O represents a program's worst-case running time. We determine an algorithm's Big-O by estimating how many operations it performs in the worst-case situation with an input of size N. Big-O is the bound we consult most often, because we must always plan for the worst-case scenario. For example, O(log n) is the Big-O of a binary search algorithm.

 Big-Ω (Omega) represents a program's best-case running time. We measure Big-Ω by counting the number of operations a program performs in the best-case situation given an input of size N. An optimized Bubble Sort, for example, has a best case of Ω(N), since in the best-case scenario the list is already sorted and the bubble sort terminates after the first pass.

 Big-Θ (Theta) notation defines a tight bound on a program's time complexity and is commonly used to describe the average time an algorithm requires across its inputs; in this report, Big-Θ denotes the average case of an algorithm's time complexity.

 The "little" o notation is used significantly less frequently in complexity analysis. Little o is stronger than big O (Chaitanya, 2021); while O suggests no quicker development, o signals absolutely slower growth. In contrast, denotes a strictly quicker growth.

To discuss asymptotic analysis and assess the effectiveness of an algorithm, here are some examples of algorithms and their asymptotic analysis:

a) Example: Assess The Effectiveness Of Merge Sort Algorithm:

For example, merge sort is a fast sorting algorithm with a time complexity of O(n log n). It is essentially independent of the state of the input: no matter how the input is arranged, the time complexity does not change.

The working mechanism of merge sort is to divide the list of elements in half at every step.

Because the list is halved at every step, the division is logarithmic: the number of levels of splitting is at most log n + 1.

When merge sort takes the middle of an array or subarray to divide it, that step costs O(1), because it only computes the middle index. After dividing and sorting, the merging step takes O(n), because it must merge n elements at each level.

Hence, across all these steps, the time complexity of merge sort is O(n*(log n + 1)), which simplifies to O(n log n).

Based on the asymptotic notations in the previous section, the 3 cases of merge sort can be stated: Best case [Big-Ω]: Ω(n log n); Average case [Big-Θ]: Θ(n log n); Worst case [Big-O]: O(n log n).
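The report's original code figures are not reproduced in this extract, so the following is a minimal merge sort sketch in Java (class and method names are my own illustrations, not the assignment's code) showing the halving and the O(n) merge described above:

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Recursively splits the array in half (at most log n + 1 levels) and
    // merges each level in O(n), giving O(n log n) overall.
    static int[] mergeSort(int[] a) {
        if (a.length <= 1) return a;          // base case: already sorted
        int mid = a.length / 2;               // O(1) split point
        int[] left = mergeSort(Arrays.copyOfRange(a, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
        return merge(left, right);
    }

    // Merging two sorted halves touches every element once: O(n).
    static int[] merge(int[] left, int[] right) {
        int[] out = new int[left.length + right.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length)
            out[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        while (i < left.length) out[k++] = left[i++];
        while (j < right.length) out[k++] = right[j++];
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(mergeSort(new int[]{5, 2, 9, 1, 7})));
    }
}
```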

b) Example: Assess The Effectiveness Of the Selection Sort Algorithm:

For the second example, selection sort is simple to implement but a low-performance sorting algorithm. Because it uses two nested loops, it is an O(n²) algorithm with a quadratic worst case.

Selection sort iterates over each remaining element in the array for each element in the array (two nested loops). Because there are n items and it performs about n operations for each of them, the time complexity is O(n²).

Although each inner loop doesn't traverse the entire array (of size n), it traverses a quantity that is linearly proportional to the array's size, so each inner loop is still estimated as O(n). This means that no matter how the input is arranged, selection sort takes O(n²) time. Based on the asymptotic notations in the previous section, I can analyse the complexity of this algorithm:

Best case: even when there is no need for sorting, i.e. the array is already sorted, the inner scan still runs in full. Best case [Big-Ω]: Ω(n²).

Average case: when the array elements are in random order. Average case [Big-Θ]: Θ(n²).

Worst case: when the array is in reverse order. Worst case [Big-O]: O(n²).
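A minimal selection sort sketch (again with illustrative names, not the report's own code) makes the two nested loops visible; note that the inner scan runs in full even on a sorted input, which is why all three cases are quadratic:

```java
public class SelectionSortDemo {
    // The outer loop runs n - 1 times and the inner loop scans the unsorted
    // remainder, so roughly n*(n-1)/2 comparisons happen regardless of
    // input order: O(n^2) in the best, average, and worst cases.
    static void selectionSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int minIndex = i;
            for (int j = i + 1; j < a.length; j++) {  // always performed in full
                if (a[j] < a[minIndex]) minIndex = j;
            }
            int tmp = a[i]; a[i] = a[minIndex]; a[minIndex] = tmp; // place the minimum
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 9, 1, 7};
        selectionSort(data);
        System.out.println(java.util.Arrays.toString(data));
    }
}
```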

c) Example: Assess The Effectiveness Of the Insertion Sort Algorithm:

The last example is the insertion sort algorithm, another simple-to-implement algorithm with fairly low performance. With an input of n elements to be sorted and two nested loops, this algorithm requires quadratic effort, O(n²) time complexity, except in the best case: each loop's iteration count grows linearly with n, so their nested combination is quadratic.

In the best-case scenario, the program finds the insertion position for each element with a single comparison (each time the current element is compared with the sorted side, it is already in the right place, so no shift is needed; simply put, this is the case of an input list that is already sorted), so the best case costs 1 + 1 + ... (n times) = O(n). The average and worst cases are both estimated as O(n²), because in every case except the best one the program must execute the two nested loops, and even when half the list is already sorted it performs approximately n² operations. Based on the asymptotic notations in the previous section, I can analyse the complexity of this algorithm:

Best case [Big-Ω]: when the list is already sorted: Ω(n).

Average case [Big-Θ]: when the program must traverse roughly half to three-fourths of the list for each element: Θ(n²).

Worst case [Big-O]: when the list must be fully traversed (i.e. it is in reverse order): O(n²).
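A minimal insertion sort sketch (illustrative names) shows why the best case is linear: the inner loop does no shifting at all when the input is already sorted:

```java
public class InsertionSortDemo {
    // Already-sorted input: the while condition fails immediately each time,
    // so only n - 1 comparisons happen, Ω(n). Otherwise elements are shifted
    // one by one, up to n^2 operations in the average and worst cases.
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int current = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > current) { // shift larger elements right
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = current;                // insert into its sorted place
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 9, 1, 7};
        insertionSort(data);
        System.out.println(java.util.Arrays.toString(data));
    }
}
```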

3. Some Examples To Clarify O(1), O(n), O(N log N) And Other Common Complexities

a) O(1) – Constant time

O(1) signifies an algorithm that always executes in the same amount of time (or space), regardless of the size of the input data set (Woltmann, 2020).

For example, in merge sort, one of the crucial steps is finding the division point by calculating the middle index of an array:
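The original code figure is not reproduced in this extract; assuming it showed the usual middle-index computation, a minimal sketch would be:

```java
public class MidIndexDemo {
    // One subtraction, one division, and one addition: O(1) at any input size.
    // (Written as low + (high - low) / 2 to avoid integer overflow.)
    static int middleIndex(int low, int high) {
        return low + (high - low) / 2;
    }

    public static void main(String[] args) {
        System.out.println(middleIndex(0, 999_999));  // same cost at any size
    }
}
```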

No matter what the state of the input is or how many elements it contains (1,000, 10,000, or even 1 million or more), the complexity of this calculation is always O(1), because it performs just one computation (a sum and a division).

b) O(n) – Linear time

O(n) indicates that the complexity grows linearly with the number of elements n; it typically belongs to algorithms that traverse every element in a collection. If n doubles, the running time approximately doubles too.

For example, to remove the last node from a singly linked list, the program must traverse from the head pointer, following each node's next pointer, to the node just before the last one.

Although the real implementation actually performs (n − 1) traversal steps, Big-O notation keeps only the dominant term, so this is O(n). Even if the actual count were (n − 2) or (n − 3), the complexity would still be O(n).
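A hedged sketch of the traversal described above, assuming a simple Node class with a next link (all names are illustrative, not the report's code):

```java
public class SinglyLinkedListDemo {
    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    Node head;

    // Walks from the head to the node just before the tail: about n - 1
    // steps, which Big-O reduces to O(n).
    void removeLast() {
        if (head == null) return;                        // empty list
        if (head.next == null) { head = null; return; }  // single element
        Node current = head;
        while (current.next.next != null)                // stop before the tail
            current = current.next;
        current.next = null;                             // unlink the last node
    }

    public static void main(String[] args) {
        SinglyLinkedListDemo list = new SinglyLinkedListDemo();
        Node a = new Node(1), b = new Node(2), c = new Node(3);
        a.next = b; b.next = c; list.head = a;
        list.removeLast();
        System.out.println(list.head.next.data);         // 2, now the last node
    }
}
```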

c) O(n²) – Quadratic time

O(n²) describes a function whose complexity is proportional to the square of the input size; each additional nested iteration over the input raises the complexity by another factor of n.

Bubble sort is the best example to illustrate quadratic time complexity.

The time required to solve the problem grows significantly with input size: in the average and worst cases, this algorithm runs two nested loops and has a quadratic running time, O(n²).
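A minimal bubble sort sketch (illustrative names) makes the two nested loops visible; the optional swapped flag also gives the Ω(n) best case mentioned in section 2:

```java
public class BubbleSortDemo {
    // Two nested loops compare and swap adjacent elements: O(n^2) in the
    // average and worst cases. With the swapped flag, an already sorted
    // input finishes after a single pass, Ω(n).
    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            boolean swapped = false;
            for (int j = 0; j < a.length - 1 - i; j++) {
                if (a[j] > a[j + 1]) {         // adjacent pair out of order
                    int tmp = a[j]; a[j] = a[j + 1]; a[j + 1] = tmp;
                    swapped = true;
                }
            }
            if (!swapped) break;               // no swaps: already sorted
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 9, 1, 7};
        bubbleSort(data);
        System.out.println(java.util.Arrays.toString(data));
    }
}
```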

d) O(log n) – Logarithmic time

O(log n) denotes an algorithm whose complexity grows logarithmically as the input size increases.

As a direct consequence, O(log n) algorithms scale quite well, and taking bigger inputs is far less likely to create performance issues. Simply put, Logarithmic time complexities are commonly associated with algorithms that divide problems in half every time (Newton, 2017).

Here is an example of O(log n): the isContain() operation in my doubly linked list.

This operation uses a binary-search-style algorithm to determine whether the list contains the checked data argument. In simple terms, it halves the search range of the doubly linked list on each loop until the node data is found or the range is exhausted, so it performs O(log n) comparisons (though reaching the middle node still requires pointer traversal, since a linked list has no random access).
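The original isContain() listing is not reproduced in this extract; the following is a plausible reconstruction under the assumptions that the list is sorted and a size field is maintained (all names are illustrative):

```java
public class DoublyLinkedListDemo {
    static class Node {
        int data;
        Node prev, next;
        Node(int data) { this.data = data; }
    }

    Node head;
    int size;

    void append(int data) {                    // helper to build a sorted list
        Node n = new Node(data);
        if (head == null) { head = n; }
        else {
            Node last = head;
            while (last.next != null) last = last.next;
            last.next = n;
            n.prev = last;
        }
        size++;
    }

    // Binary-search-style lookup: the index range is halved on every
    // iteration, so it performs O(log n) comparisons.
    boolean isContain(int checkedData) {
        int low = 0, high = size - 1;
        while (low <= high) {
            int midIndex = low + (high - low) / 2;
            Node mid = nodeAt(midIndex);       // walking here still costs traversal
            if (mid.data == checkedData) return true;
            if (mid.data < checkedData) low = midIndex + 1;
            else high = midIndex - 1;
        }
        return false;
    }

    private Node nodeAt(int index) {
        Node current = head;
        for (int i = 0; i < index; i++) current = current.next;
        return current;
    }

    public static void main(String[] args) {
        DoublyLinkedListDemo list = new DoublyLinkedListDemo();
        for (int v : new int[]{1, 3, 5, 7, 9}) list.append(v);
        System.out.println(list.isContain(7));  // true
        System.out.println(list.isContain(4));  // false
    }
}
```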

e) O(N log N) – Linearithmic time

O(N log N) denotes that log n operations take place n times. O(n log n) time is prevalent in recursive sorting algorithms and binary-tree-based sorting algorithms; most efficient comparison sorts run in this time, and algorithms with this cost usually scale well in practice.

The familiar example of O(n log n) is quick sort:

Simply put, quick sort applies divide and conquer through its partition mechanism. Each partition pass must touch all the elements in its range, costing O(n) per level, and in the best and average cases the pivot splits the array roughly in half, so there are about log n levels of recursion. Multiplying the two, quick sort takes O(n log n) in the best and average cases of the sorting process.
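A minimal quick sort sketch (illustrative names, using the Lomuto partition scheme as one common choice) shows the O(n) partition pass and the recursive halving:

```java
public class QuickSortDemo {
    static void quickSort(int[] a, int low, int high) {
        if (low < high) {
            int p = partition(a, low, high);  // place the pivot, O(n) scan
            quickSort(a, low, p - 1);         // conquer the left part
            quickSort(a, p + 1, high);        // conquer the right part
        }
    }

    // Lomuto partition: uses the last element as the pivot and moves
    // everything smaller than it to the left in a single linear scan.
    static int partition(int[] a, int low, int high) {
        int pivot = a[high];
        int i = low - 1;
        for (int j = low; j < high; j++) {
            if (a[j] < pivot) {
                i++;
                int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            }
        }
        int tmp = a[i + 1]; a[i + 1] = a[high]; a[high] = tmp;
        return i + 1;
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 9, 1, 7};
        quickSort(data, 0, data.length - 1);
        System.out.println(java.util.Arrays.toString(data));
    }
}
```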

f) O(2ⁿ) – Exponential time

This complexity denotes that the algorithm's running time doubles every time the input grows in size. For example, the Fibonacci sequence can be implemented in two ways. In the worst implementation, the naive recursive one, the number of recursive calls and calculations roughly doubles every time the Fibonacci index increases (even by only 1 unit). This complexity should always be avoided when implementing an algorithm.
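A sketch of the naive recursive implementation referred to above (illustrative names):

```java
public class FibonacciDemo {
    // Each call spawns two more calls, so the call tree roughly doubles as
    // n grows: O(2^n) time. An iterative version computes the same value in O(n).
    static long fib(int n) {
        if (n <= 1) return n;                 // base cases: fib(0)=0, fib(1)=1
        return fib(n - 1) + fib(n - 2);       // two recursive calls per level
    }

    public static void main(String[] args) {
        System.out.println(fib(10));          // 55; large n becomes very slow
    }
}
```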

g) O(n!) – Factorial time

O(n!) is the product of all positive integers less than or equal to n (Woltmann, 2020). It is the "worst" complexity commonly encountered. For instance, a deck of poker cards has 52 cards, giving 52! different orderings after shuffling, an astronomically large number. There is little point in giving a code example for this one, because any algorithm that reaches O(n!), such as brute-force generation of every permutation, is impractical for anything but the smallest inputs.
