Abstract

This paper improves upon the previous work on the New Sorting Algorithm first proposed by Sundararajan and Chakraborty (2007). Here we take the pivot element as the middle element of the array. We call this improved version the Middle Pivot element Algorithm (MPA), and it is found that MPA is much faster than the two algorithms RPA (Random Pivot element Algorithm) and FPA (First Pivot element Algorithm), in which the pivot element was selected randomly or as the first element, respectively.

1. Introduction

One of the computational problems that an algorithm encounters is due to the multiple input parameters that have a direct effect on the execution time of the algorithm; we call this the problem of parameterized complexity of the algorithm. In the recent past, much work has been done to simulate the parameterized complexity of an algorithm under different situations. The present work is an improvement on the previous work by Sundararajan and Chakraborty [1] in which the New Sorting Algorithm was introduced. In order to make this paper self-contained, we reproduce the algorithm here.

Step 1. Initialize the first element of the array as a pivot element.

Step 2. Starting from the second element, compare it to the pivot element.

Substep 1. If pivot < element then place the element in the last unfilled position of the temporary array (of the same size as the original one).

Substep 2. If pivot ≥ element then place the element in the first unfilled position of the temporary array.

Step 3. Repeat Step 2 till the last element of the array.

Step 4. Finally place the pivot element in the blank position of the temporary array. (Remark: the blank position is created because one element of the original array was taken out as pivot).

Step 5. Split the array into two, based on the pivot element’s position.

Step 6. Repeat Steps 1 to 5 till the array is sorted completely.

Prashant et al. [2], Anchala and Chakraborty [3, 4], and a recent unpublished paper by the present authors entitled “Parameterized Complexity: A Statistical Approach Combining Factorial Experiments with Principal Component Analysis” are some of the related works. This last paper compares the parameterized complexity of two algorithms, the RPA (Random Pivot element Algorithm) and the FPA (First Pivot element Algorithm), with respect to the new sorting technique developed by Sundararajan and Chakraborty [1]. In this paper we take the pivot element as the middle element of the array in the new sorting technique and call this algorithm MPA (Middle Pivot element Algorithm). Thus Step 1 in MPA should be read as follows.

Step 1. Initialize the middle element of the array as a pivot element.

The rest of the steps are the same as above.
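To make the procedure concrete, the following C++ sketch renders the steps above with the middle element taken as the pivot, as in MPA. It is only an illustrative rendering under our own naming and indexing choices (the function mpaSort and a temporary array filled from both ends); it is not the authors' Visual C++ implementation.

#include <iostream>
#include <vector>

// Illustrative sketch of the New Sorting technique with the middle element
// as the pivot (MPA). Names and indexing conventions are ours.
void mpaSort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;                   // zero or one element: already sorted

    int mid   = lo + (hi - lo) / 2;         // Step 1: middle element as the pivot
    int pivot = a[mid];

    std::vector<int> temp(hi - lo + 1);     // temporary array of the same size
    int front = 0;                          // first unfilled position
    int back  = hi - lo;                    // last unfilled position

    for (int i = lo; i <= hi; ++i) {        // Steps 2 and 3: scan the other elements
        if (i == mid) continue;             // the pivot itself is taken out
        if (pivot < a[i])
            temp[back--] = a[i];            // Substep 1: element larger than the pivot
        else
            temp[front++] = a[i];           // Substep 2: element <= the pivot
    }

    temp[front] = pivot;                    // Step 4: pivot fills the blank position
    for (int i = lo; i <= hi; ++i)
        a[i] = temp[i - lo];                // copy back before splitting

    mpaSort(a, lo, lo + front - 1);         // Steps 5 and 6: split around the pivot
    mpaSort(a, lo + front + 1, hi);         // and repeat on both parts
}

int main() {
    std::vector<int> data = {7, 3, 9, 1, 4, 8, 2};
    mpaSort(data, 0, static_cast<int>(data.size()) - 1);
    for (int x : data) std::cout << x << ' ';
    std::cout << '\n';                      // prints: 1 2 3 4 7 8 9
}

Changing the single line that chooses mid to the first index (or to a randomly chosen index in [lo, hi]) gives the FPA and RPA variants compared below.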

The paper reveals two interesting features: (i) MPA is much faster than RPA and FPA and (ii) MPA can sort a larger number of elements than can be sorted by RPA and FPA. See Mahmoud [5] for a comprehensive literature on sorting with special emphasis on distribution theory.

2. Statistical Analysis of the MPA Algorithm

A 3-cube factorial experiment is employed to examine the singular and nonsingular effects of the binomial inputs on the complexity. The three factors are (i) n, the number of elements to be sorted, and (ii) m and (iii) P, the two parameters of the binomial (m, P) distribution. The three factors and their levels are given in Table 1.

Five runs for each of the 27 treatment combinations were made and execution time was obtained using Visual C++ code.
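As an illustration of how one treatment combination could be timed, the sketch below generates binomial(m, P) data with the C++ standard library and averages the execution time of mpaSort over five runs. The factor levels used here are placeholders of ours, not the levels of Table 1, and the harness is not the authors' original Visual C++ code; it assumes the mpaSort function of the earlier sketch is linked in (with that sketch's demonstration main removed).

#include <chrono>
#include <iostream>
#include <random>
#include <vector>

// Illustrative timing harness for one treatment combination (n, m, P).
void mpaSort(std::vector<int>& a, int lo, int hi);  // defined in the Section 1 sketch

double timeOneRun(int n, int m, double P, std::mt19937& gen) {
    std::binomial_distribution<int> binom(m, P);    // binomial(m, P) inputs
    std::vector<int> data(n);
    for (int& x : data) x = binom(gen);

    auto start = std::chrono::steady_clock::now();
    mpaSort(data, 0, n - 1);
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(end - start).count();   // seconds
}

int main() {
    std::mt19937 gen(12345);                        // fixed seed for repeatability
    const int runs = 5;                             // five runs per combination
    double total = 0.0;
    for (int r = 0; r < runs; ++r)
        total += timeOneRun(100000, 50, 0.5, gen);  // placeholder levels of n, m, P
    std::cout << "mean time: " << total / runs << " s\n";
}

Repeating such a loop over all 27 treatment combinations yields the data analyzed below.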

A 3-cube factorial experiment was conducted and analyzed using MINITAB 16, and the results are given below in Table 2. As there are five runs, replicate observations are available, providing the error degrees of freedom. The significant points of the variance ratio (F) statistic are compared in Table 3.

The interesting and important point to be noted from Table 4 is that, irrespective of the values of n, m, and P, the sorting time is much smaller for MPA compared to the other two algorithms, RPA and FPA. In other words, the New Sorting technique with the middle element as the pivot is faster and should be preferred.

A systematic pattern is observed when the binomial parameters are changed [2]. In MPA, for fixed n and fixed m, the sorting time shows a decreasing trend in response to an increase in P, while for fixed n and P, the sorting time is minimum at a particular level of m, the time at one level of m being less than or equal to that at another. In the other two algorithms, the sorting time responds in a very erratic way; for example, in RPA, for fixed n, it behaves erratically when we change P keeping m fixed or change m keeping P fixed. On the other hand, in FPA, for fixed n and fixed m, the sorting time shows an almost decreasing trend as the probability points are increased; however, for fixed n and P, no systematic pattern is found in the time values corresponding to changes in the value of m.

Now we study the behavior of the three factors separately on the sorting time, in seconds, by observing the n-time, m-time, and P-time plots.

(1) n-Time Plot
The n-time plot shows that the average time complexity can be well explained by a second-degree polynomial with a high value of R². We may conclude that, experimentally, the complexity supports O(n²) (See Figure 1).
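For readers who wish to reproduce such a fit outside MINITAB, the sketch below fits time = b0 + b1*n + b2*n^2 by ordinary least squares via the normal equations; a close quadratic fit of this kind is what supports the O(n²) reading. The (n, time) pairs are placeholders of ours, not the measured values behind Figure 1, and n is expressed in thousands of elements to keep the normal equations well conditioned.

#include <array>
#include <cstddef>
#include <iostream>
#include <vector>

// Fit time = b0 + b1*n + b2*n^2 by ordinary least squares (normal equations).
std::array<double, 3> quadraticFit(const std::vector<double>& n,
                                   const std::vector<double>& t) {
    double A[3][4] = {};                              // augmented normal equations
    for (std::size_t k = 0; k < n.size(); ++k) {
        double x[3] = {1.0, n[k], n[k] * n[k]};
        for (int i = 0; i < 3; ++i) {
            for (int j = 0; j < 3; ++j) A[i][j] += x[i] * x[j];
            A[i][3] += x[i] * t[k];
        }
    }
    for (int i = 0; i < 3; ++i) {                     // forward elimination
        for (int r = i + 1; r < 3; ++r) {
            double f = A[r][i] / A[i][i];
            for (int c = i; c < 4; ++c) A[r][c] -= f * A[i][c];
        }
    }
    std::array<double, 3> b{};
    for (int i = 2; i >= 0; --i) {                    // back substitution
        b[i] = A[i][3];
        for (int j = i + 1; j < 3; ++j) b[i] -= A[i][j] * b[j];
        b[i] /= A[i][i];
    }
    return b;
}

int main() {
    // Placeholder (n in thousands, mean time in seconds) pairs, roughly quadratic.
    std::vector<double> n = {10, 20, 30, 40, 50};
    std::vector<double> t = {0.12, 0.44, 0.99, 1.78, 2.75};
    auto b = quadraticFit(n, t);
    std::cout << "time = " << b[0] << " + " << b[1] << "*n + " << b[2] << "*n^2\n";
}

A close fit with a clearly nonzero coefficient on the n^2 term is what an empirical O(n²) reading rests on.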

(2) m-Time Plot
From Figure 2 it is clear that the complexity is of O(m²).

(3) P-Time Plot
From Figure 3, the sorting time is of O(P²).

We may conclude that, as far as the singular inputs are considered, the experimental complexity can be well explained by a second-degree polynomial in n and in each of the binomial inputs.

To make further investigations, a second-degree composite response surface design was fitted using the MINITAB statistical package, and the results are given in Table 5.

The contour plots for the different factor pairs are given below.

(1) - Plot
The minimum response is found in the middle and the highest response is near the upper left corner (See Figure 4).

(2) - Plot
A minimum time of 0.7 seconds occurs near the middle and goes up to 22 seconds when the remaining factor is held at its middle level (See Figure 5).

(3) - Plot
The minimum time of 0.7 seconds increases to 22 seconds, which occurs in the lower right corner (See Figure 6).

For a target response of 1 second, the optimal factor settings give a minimum sorting time of 0.43 seconds.

Remark
The capital letter P indicates the probability of success in a trial for the binomial variate and should not be confused with the small letter p of the p-value in Tables 2 and 5, which gives the smallest level of significance at which a test statistic (here the F statistic) becomes significant. The level of significance is the probability of committing a type I error, i.e., the probability of rejecting the null hypothesis when it is true. Thus P and p represent two different probabilities.

Further results obtained are as follows: condition number: ; optimality (average leverage): 0.5000; maximum leverage: 0.5946.

3. Conclusion

As far as the New Sorting method is concerned, taking the pivot element as the middle element is preferable to taking it as the first element or selecting it randomly. Also, the new sorting technique with the middle element as the pivot is more efficient in the sense that it can sort a larger number of elements. However, when compared with Quick sort for the same number of observations to be sorted, the latter is found to be faster. Our future work is to improve the method by eliminating the auxiliary array, which may further reduce the sorting time as well as the space complexity.