
Thursday, May 23, 2013

Computer Science 07

The oldest kind of list in the world is the Linked List. In a Linked List, each element holds a pointer to the next element in the list. Finding the ith element is O(i), because we have to start at the front and follow i pointers, one after another. On a structure like this, Binary Search doesn't work very well: every probe into the middle of the list costs linear time.
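The Linked List above can be sketched in a few lines of Python. This is a minimal illustration, not Python's actual list machinery; the names Node and get are my own:

```python
class Node:
    """One element of a singly linked list: a value and a pointer to the next node."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def get(head, i):
    """Reach the ith element by following i 'next' pointers -- O(i) steps."""
    node = head
    for _ in range(i):
        node = node.next
    return node.value

# Build the list 10 -> 20 -> 30 and fetch the element at index 2.
head = Node(10, Node(20, Node(30)))
print(get(head, 2))  # 30, after walking two pointers
```

Notice that get has to touch every node before position i, which is exactly why Binary Search's jumps to the middle are expensive here.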

In Python, however, lists are implemented differently: as a contiguous block of pointers. Let us suppose that a pointer in a list occupies 4 units of memory. The definition of a unit is arbitrary, as is the choice of the number 4. To get to the ith object of a list of ints, we follow the pointer stored at:

start+(i*n)

where start is where the block of pointers begins and n is the size of one pointer.

We know exactly where to find the pointer to the actual object, instead of having to traverse the elements one by one until we reach it. In Python's implementation, and in many other languages, indexing into a list therefore takes constant time, which is written O(1).
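The start+(i*n) arithmetic can be simulated with a flat array standing in for memory. Everything here (the memory list, POINTER_SIZE, pointer_to) is a made-up model for illustration, not CPython's real layout:

```python
# Simulate constant-time indexing: 'memory' holds 4-unit pointers back to back.
POINTER_SIZE = 4          # the arbitrary 4 units from the text
start = 0                 # where the list's block of pointers begins

# Three fake pointers (addresses of the actual int objects), 4 units each.
memory = [100, 0, 0, 0,   # pointer to element 0
          200, 0, 0, 0,   # pointer to element 1
          300, 0, 0, 0]   # pointer to element 2

def pointer_to(i):
    """One multiplication and one addition, regardless of i -- O(1)."""
    return memory[start + i * POINTER_SIZE]

print(pointer_to(2))  # 300
```

However long the list gets, pointer_to still does the same two arithmetic operations, which is the whole point of the scheme.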

This implementation technique is called indirection. It is often said that almost any problem we look at can be solved by adding a layer of indirection. The only problems that cannot be solved this way are those where the levels of indirection become so numerous that they use up the available memory.

Binary Search is a very powerful tool, but it works only under the assumption that the list is sorted. To search an unsorted list L with Binary Search, the process is as shown:
1) Sort L
2) Perform Binary Search on L
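Step 2 above could look like the following minimal sketch. The function name bSearch and the exact probe logic are my own, not from the notes:

```python
def bSearch(L, e):
    """Return True if e is in the sorted list L, halving the search range each step."""
    low, high = 0, len(L) - 1
    while low <= high:
        mid = (low + high) // 2   # probe the middle of the remaining range
        if L[mid] == e:
            return True
        elif L[mid] < e:
            low = mid + 1         # e can only be in the right half
        else:
            high = mid - 1        # e can only be in the left half
    return False

L = [2, 3, 5, 7, 11, 13]
print(bSearch(L, 7))   # True
print(bSearch(L, 4))   # False
```

Each iteration discards half of the remaining range, which is where the O(log len(L)) bound comes from.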

We know that Binary Search is O(log len(L)). For the two-step process above to beat Linear Search, sorting L would have to take less than O(len(L)) time. However, there exists no sorting algorithm that can sort a list in sub-linear time; we must at least look at every element once. Therefore, if a list is unsorted, searching it this way always takes at least O(len(L))+O(log len(L)) time.

However, there are still benefits to using Binary Search after sorting an unsorted list, as opposed to Linear Search. We are interested in what's called the Amortized Complexity. The idea here is that if we sort the list once, we can perform the search many times. Therefore, if the number of searches performed on the list is large, the Order of Growth is:

lim k->inf (O(len(L))+k*O(log len(L)))/k
= lim k->inf O(len(L))/k + O(log len(L))
= O(log len(L))
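The sort-once, search-many idea can be sketched with Python's standard bisect module, which performs exactly this kind of Binary Search on a sorted list. The helper name contains is my own:

```python
import bisect

L = [7, 4, 3, 7, 3, 5, 2, 6, 5, 3, 1]

# Pay the sorting cost once...
L.sort()

def contains(q):
    """One O(log len(L)) Binary Search on the already-sorted list."""
    i = bisect.bisect_left(L, q)
    return i < len(L) and L[i] == q

# ...then amortize it over many cheap searches.
for q in [5, 8, 1, 4]:
    print(q, contains(q))
```

The more queries we run against the same sorted list, the less the one-time sorting cost matters per query, which is what the limit above expresses.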


Obama and Bubble Sort

A very simple sort is the Selection Sort. It is not a very good way to sort, but it is a useful method to study. Selection Sort, like many algorithms, depends upon maintaining an Invariant: a condition that remains true at every step of the algorithm. Here we keep a pointer into the list that divides it into a Prefix and a Suffix, and the invariant we maintain is that the Prefix is always sorted (and contains no element larger than anything in the Suffix). We start with an empty Prefix. Each step through the algorithm, we decrease the size of the Suffix by one element while increasing the Prefix by one element. We are done when the size of the Suffix is 0.

In Selection Sort, suppose that we have the following objects in the list:
| 3 5 6 7 8 5 4 3 2

The | is the invariant pointer. At each step, the algorithm looks through everything to the right of the pointer (the Suffix), swaps the smallest element found there with the first element of the Suffix, then moves the pointer one position to the right:

2 | 5 6 7 8 5 4 3 3

It then looks for the smallest element of the new Suffix and swaps it with the Suffix's first element:

2 3 | 6 7 8 5 4 5 3

This will continue on as shown:

2 3 3 | 7 8 5 4 5 6
2 3 3 4 | 8 5 7 5 6
2 3 3 4 5 | 8 7 5 6
2 3 3 4 5 5 | 7 8 6
2 3 3 4 5 5 6 | 8 7
2 3 3 4 5 5 6 7 | 8
2 3 3 4 5 5 6 7 8 |

The following code implements a simple Selection Sort:

def selectionSort(L):
    invariant=0
    for i in range(len(L)):
        #Find the smallest element in the Suffix
        minValue=L[invariant]
        minPointer=invariant
        for j in range(invariant,len(L)):
            if (L[j] < minValue):
                minValue=L[j]
                minPointer=j
        #Swap it with the first element of the Suffix
        temp=L[invariant]
        L[invariant]=minValue
        L[minPointer]=temp
        #Grow the Prefix by one element
        invariant+=1
    return L

L=[7,4,3,7,3,5,2,6,5,3,1]
print(selectionSort(L))


In this case, here's the breakdown of the list as it loops through the steps:
[1, 4, 3, 7, 3, 5, 2, 6, 5, 3, 7]
[1, 2, 3, 7, 3, 5, 4, 6, 5, 3, 7]
[1, 2, 3, 7, 3, 5, 4, 6, 5, 3, 7]
[1, 2, 3, 3, 7, 5, 4, 6, 5, 3, 7]
[1, 2, 3, 3, 3, 5, 4, 6, 5, 7, 7]
[1, 2, 3, 3, 3, 4, 5, 6, 5, 7, 7]
[1, 2, 3, 3, 3, 4, 5, 6, 5, 7, 7]
[1, 2, 3, 3, 3, 4, 5, 5, 6, 7, 7]
[1, 2, 3, 3, 3, 4, 5, 5, 6, 7, 7]
[1, 2, 3, 3, 3, 4, 5, 5, 6, 7, 7]

For a problem of length 11, we actually went through the inner loop 66 times. If the list were length 12, it would go through 78 times. The order of growth for this algorithm is:
O(n(n+1)/2)

If we take away the multiplicative and additive constants, we get:
O(n^2)
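The 66 and 78 figures can be checked by counting how often the inner loop body runs. The counting function innerPasses is my own name for this sketch:

```python
def innerPasses(n):
    """Count how many times the inner loop of selection sort runs for a list of length n."""
    count = 0
    for invariant in range(n):
        # The inner loop scans the Suffix, which starts at the invariant pointer.
        for i in range(invariant, n):
            count += 1
    return count

print(innerPasses(11))  # 66
print(innerPasses(12))  # 78
```

The count is n + (n-1) + ... + 1 = n(n+1)/2, which matches the order of growth stated above.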

For some time, people were actually unsure whether we could sort a list any better than this. The answer came in the form of a Divide and Conquer algorithm. For a Divide and Conquer algorithm, we:
1) Choose a Threshold Input Size n0, the smallest problem we solve directly
2) Decide how to divide the problem into sub-problems
3) Combine the sub-solutions back into the main solution

John von Neumann observed that given two sorted lists, we can merge them quickly. Here's an example with two input lists (1 and 2) and an empty output list (3):
1=[1,3,5,6,9]
2=[1,7,8,10,11,12]
3=[]

We first compare the first elements of each list. We take the smaller of the two (or the one from list 1, if they are equal) and append it to the merged list like this:
1=[3,5,6,9]
2=[1,7,8,10,11,12]
3=[1]

The process goes on like this:

1=[3,5,6,9]
2=[7,8,10,11,12]
3=[1,1]

1=[5,6,9]
2=[7,8,10,11,12]
3=[1,1,3]

1=[6,9]
2=[7,8,10,11,12]
3=[1,1,3,5]

1=[9]
2=[7,8,10,11,12]
3=[1,1,3,5,6]

1=[9]
2=[8,10,11,12]
3=[1,1,3,5,6,7]

1=[9]
2=[10,11,12]
3=[1,1,3,5,6,7,8]

1=[]
2=[10,11,12]
3=[1,1,3,5,6,7,8,9]

1=[]
2=[]
3=[1,1,3,5,6,7,8,9,10,11,12]

In this case, the maximum number of copies is linear in the total length of the two lists, and the maximum number of comparisons we make is also bounded by the total length of the lists, so it is linear as well. We write this as:

O(len(L))+O(len(L))

Removing all multiplicative constants, this is the time taken to merge two lists:

O(len(L))

We can merge in Linear time.

To implement mergesort, we break up the list into a bunch of lists of length one, then we start merging them into sorted lists.

Since each level of merging is O(n) where n is the length of the list, and there are log2 n levels of merging, we end up with:

O(n log2 n)

Mergesort is log-linear, and therefore a little slower than Linear, but a lot faster than Quadratic.

Here is the source code for a merge sort function:

def merge(L1,L2,compare):
    #Initialize an empty answer list
    ans=[]
    #Initialize positional counters
    i,j=0,0
    #As long as there are things to compare (i.e. It will run till the shorter list runs out)
    while i<len(L1) and j<len(L2):
        #Append the smaller of the two to the answer list
        if compare(L1[i],L2[j]):
            ans.append(L1[i])
            i+=1
        else:
            ans.append(L2[j])
            j+=1
    #Append the remaining list to the answer list
    while i<len(L1):
        ans.append(L1[i])
        i+=1
    while j<len(L2):
        ans.append(L2[j])
        j+=1
    #Return the list
    return ans

def mergeSort(L,compare):
    #If list is length of 1, return it
    if len(L)<2:
        return L
    else:
        #If list is longer than 1
        #Find the middle of the list
        middle=int(len(L)/2)
        #Attempt to mergeSort it further
        L1=mergeSort(L[0:middle],compare)
        L2=mergeSort(L[middle:len(L)],compare)
        #L1 and L2 should be sorted by now
        return merge(L1,L2,compare)

L=[2,4,5,6,8,12,12,12,1,3,5,6,7,13]
print(mergeSort(L,compare=lambda x,y:x<y))


Let us look at the base case for the mergeSort function:
If the list has length 0 or 1, it is considered sorted and is returned as is.

Next, we look at the recursive case:
If the list is longer than 1, break it in half, sort each half recursively, then merge the halves and return the answer.

Consider a call mergeSort(L) where len(L)==2: the list is broken into L1 and L2 of size 1 each. These are merged, and the result is returned into the previous caller's L1 or L2.

That caller then has two sorted lists of size 2 as its L1 and L2, merges them in turn, and returns the result to its own caller's L1 or L2.

The caller above that gets sorted size-4 lists as its L1 and L2, and merges those.

Now, you might ask, what happens if the list size is not a power of 2? For example, if the list size is 5:
1) The top-level call invokes mergeSort(L[0:2]) and mergeSort(L[2:5]).
    a) The first level-2 call invokes mergeSort on its first element and on its second element.
        i) The first level-3 call returns its length-1 list.
        ii) The second level-3 call returns its length-1 list.
       The first level-2 call then merges the two sorted lists and returns the result.
    b) The second level-2 call (on the length-3 slice) invokes mergeSort on its first element and on its last two elements.
        i) The first level-3 call returns its length-1 list.
        ii) The second level-3 call invokes mergeSort on each of its two elements.
            (1) The first level-4 call returns its length-1 list.
            (2) The second level-4 call returns its length-1 list.
           The second level-3 call then merges the two lists and returns the result.
       The second level-2 call then merges the two sorted lists and returns the result.
2) The top-level call merges the two sorted lists and returns the final answer.
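The call tree above can be made visible by instrumenting mergeSort with a depth counter. The depth parameter and the inline merge are my additions for this sketch; the recursion itself follows the same idea as the merge and mergeSort functions shown earlier:

```python
def mergeSortTraced(L, depth=0):
    """Merge sort with a depth parameter so each recursive call can be printed."""
    print("  " * depth + "sorting " + str(L))
    if len(L) < 2:
        return L
    middle = len(L) // 2
    L1 = mergeSortTraced(L[:middle], depth + 1)
    L2 = mergeSortTraced(L[middle:], depth + 1)
    # Merge the two sorted halves.
    ans, i, j = [], 0, 0
    while i < len(L1) and j < len(L2):
        if L1[i] < L2[j]:
            ans.append(L1[i]); i += 1
        else:
            ans.append(L2[j]); j += 1
    ans.extend(L1[i:])
    ans.extend(L2[j:])
    return ans

print(mergeSortTraced([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```

Running this on a length-5 list prints the uneven 2/3 split described above: one branch bottoms out at depth 3, the other at depth 4.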
