ISRN Computational Mathematics
Volume 2012 (2012), Article ID 947634, 5 pages
Research Article

The Middle Pivot Element Algorithm
Anchala Kumari1 and Soubhik Chakraborty2

1Department of Statistics, Patna University, Patna 800005, India
2Department of Applied Mathematics, Birla Institute of Technology (BIT Mesra), Ranchi 835215, India

Received 7 November 2012; Accepted 29 November 2012

Academic Editors: P. Amodio, R. López-Ruiz, and Q.-W. Wang

Copyright © 2012 Anchala Kumari and Soubhik Chakraborty. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


This paper is an improvement over the previous work on the New Sorting Algorithm first proposed by Sundararajan and Chakraborty (2007). Here we take the pivot element to be the middle element of the array. We call this improved version the Middle Pivot element Algorithm (MPA), and we find that MPA is much faster than the two algorithms RPA (Random Pivot element Algorithm) and FPA (First Pivot element Algorithm), in which the pivot element is selected randomly or as the first element, respectively.

1. Introduction

One of the computational problems an algorithm encounters arises from the multiple input parameters that have a direct effect on its execution time; we call this the problem of parameterized complexity of the algorithm. In the recent past much work has been done to simulate the parameterized complexity of algorithms under different situations. The present work is an improvement on the previous work by Sundararajan and Chakraborty [1], in which the New Sorting Algorithm was introduced. In order to make this paper self-contained, we reproduce it here.

Step 1. Initialize the first element of the array as a pivot element.

Step 2. Starting from the second element, compare it to the pivot element.

Substep 1. If pivot < element, then place the element in the last unfilled position of the temporary array (of the same size as the original).

Substep 2. If pivot ≥ element then place the element in the first unfilled position of the temporary array.

Step 3. Repeat Step 2 till the last element of the array.

Step 4. Finally place the pivot element in the blank position of the temporary array. (Remark: the blank position is created because one element of the original array was taken out as pivot).

Step 5. Split the array into two, based on the pivot element’s position.

Step 6. Repeat Steps 1–5 till the array is sorted completely.
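The steps above can be rendered directly in code. The paper's experiments used Visual C++; the following is a minimal Python sketch of Steps 1–6 with the first element as pivot (the function name is ours, not from the original code):

```python
def new_sort(arr):
    """Sketch of the New Sorting Algorithm [1], first element as pivot."""
    if len(arr) <= 1:
        return list(arr)
    pivot = arr[0]                      # Step 1: first element as pivot
    temp = [None] * len(arr)            # temporary array of the same size
    lo, hi = 0, len(arr) - 1
    for x in arr[1:]:                   # Steps 2-3: scan remaining elements
        if pivot < x:
            temp[hi] = x                # Substep 1: larger elements fill from the back
            hi -= 1
        else:
            temp[lo] = x                # Substep 2: smaller-or-equal fill from the front
            lo += 1
    temp[lo] = pivot                    # Step 4: pivot into the blank position
    # Steps 5-6: split at the pivot's position and repeat on both parts
    return new_sort(temp[:lo]) + [pivot] + new_sort(temp[lo + 1:])
```

Like Quicksort, each pass places the pivot in its final position, but here the partitioning writes into an auxiliary array rather than swapping in place.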

Prashant et al. [2], Anchala and Chakraborty [3, 4], and a recent unpublished paper by the present authors entitled “Parameterized Complexity: A Statistical Approach Combining Factorial Experiments with Principal Component Analysis” are some of the related works. This last paper compares the parameterized complexity of two algorithms, the RPA (Random Pivot element Algorithm) and the FPA (First Pivot element Algorithm), with respect to the new sorting technique developed by Sundararajan and Chakraborty [1]. In this paper we take the pivot element as the middle term of the array in the new sorting technique and call this algorithm MPA (Middle Pivot element Algorithm). Thus Step 1 in MPA should be read as follows.

Step 1. Initialize the middle element of the array as a pivot element.

The rest of the steps are the same as above.
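Since only Step 1 differs among the three variants, the pivot choice can be factored out. A hypothetical Python sketch (the helper names fpa, mpa, and rpa are our own labels for illustration) expressing all three algorithms:

```python
import random

def new_sort_pivot(arr, choose):
    """New Sorting technique with a pluggable pivot-selection rule."""
    if len(arr) <= 1:
        return list(arr)
    i = choose(arr)                     # Step 1: index of the pivot element
    pivot = arr[i]
    rest = arr[:i] + arr[i + 1:]        # remaining elements to distribute
    temp = [None] * len(arr)
    lo, hi = 0, len(arr) - 1
    for x in rest:                      # Steps 2-3
        if pivot < x:
            temp[hi] = x                # larger elements from the back
            hi -= 1
        else:
            temp[lo] = x                # smaller-or-equal from the front
            lo += 1
    temp[lo] = pivot                    # Step 4
    return (new_sort_pivot(temp[:lo], choose) + [pivot] +
            new_sort_pivot(temp[lo + 1:], choose))

def fpa(a): return 0                        # FPA: first element as pivot
def mpa(a): return len(a) // 2              # MPA: middle element as pivot
def rpa(a): return random.randrange(len(a)) # RPA: random element as pivot
```

For example, `new_sort_pivot(data, mpa)` runs the MPA variant studied in this paper.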

The paper reveals two interesting features: (i) MPA is much faster than RPA and FPA, and (ii) MPA is more efficient in the sense that it can sort a larger number of elements than RPA and FPA can. See Mahmoud [5] for a comprehensive literature on sorting with special emphasis on distribution theory.

2. Statistical Analysis of the MPA Algorithm

A 3-cube factorial experiment is employed to examine the singular and nonsingular effects of binomial inputs on the complexity. The three factors are (i) n, the number of elements to be sorted, and (ii) m and (iii) P, the two parameters of the binomial (m, P) distribution. The three factors and their levels are given in Table 1.

Table 1: Factors and their levels.

Five runs for each of the 27 treatment combinations were made and execution time was obtained using Visual C++ code.
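One cell of this design can be mimicked in outline. The following is a hedged Python sketch (the paper's actual timings came from Visual C++ code; `sorted` below is only a stand-in for the algorithm under test, and the sampling helper is ours):

```python
import random
import time

def binomial_sample(n, m, P, seed=None):
    """Draw n observations from a binomial(m, P) distribution, stdlib only."""
    rng = random.Random(seed)
    return [sum(rng.random() < P for _ in range(m)) for _ in range(n)]

def mean_time(sort_fn, n, m, P, runs=5):
    """Average execution time of sort_fn over `runs` fresh binomial inputs."""
    total = 0.0
    for r in range(runs):
        data = binomial_sample(n, m, P, seed=r)  # new input for each run
        start = time.perf_counter()
        sort_fn(data)
        total += time.perf_counter() - start
    return total / runs
```

For example, `mean_time(sorted, 1000, 50, 0.5)` times one treatment combination; looping over the 27 combinations of the chosen n, m, and P levels reproduces the design in outline.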

A 3-cube factorial experiment was conducted and analyzed using MINITAB 16, and the results are given below in Table 2. As there are five runs for each of the 27 treatment combinations, replicated observations are available for estimating the error degrees of freedom. The significant points of the variance ratio (F) statistic are compared in Table 3.

Table 2: Results of 3-cube factorial experiment.
Table 3: Significant points of the variance ratio (F).

The interesting and important points to be noted from Table 4 are that, irrespective of the values of n, m, and P, the sorting time is much smaller for MPA as compared to the other two algorithms, RPA and FPA. In other words, the New Sorting technique with the middle element as pivot is faster and should be preferred.

Table 4: Execution time for different algorithms.

A systematic pattern is observed when the binomial parameters are changed [2]. For fixed n and fixed m, the sorting time shows a decreasing trend in response to an increase in P, while for fixed n and P, the sorting time is minimum at an intermediate level of m. In the other two algorithms the response is more erratic: in RPA, for fixed n, the sorting time responds in a very erratic way when m is changed keeping P fixed or P is changed keeping m fixed. In FPA, for fixed n and fixed m, the sorting time shows an almost decreasing trend as the probability points are increased; however, for fixed n and P, no systematic pattern is found in the time values corresponding to changes in the value of m.

Now we study the behavior of the three factors separately on sorting time, in seconds, by observing the n-time, m-time, and P-time plots.

(1) n-Time Plot
The n-time plots show that the average time complexity can be well explained by a second-degree polynomial with a high value of R². We may conclude that, experimentally, the complexity supports O(n²) behavior (see Figure 1).

Figure 1
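The quadratic trend behind this plot can be checked by an ordinary least-squares fit of a second-degree polynomial. The paper's fit was done in MINITAB; a stdlib-only sketch of the normal-equations fit might look like:

```python
def quadratic_fit(xs, ys):
    """Least-squares fit of y = a + b*x + c*x^2 via the 3x3 normal equations."""
    # Power sums needed for the X^T X matrix and X^T y vector
    S = [sum(x ** k for x in xs) for k in range(5)]
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[i + j] for j in range(3)] for i in range(3)]
    b = T[:]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    beta = [0.0] * 3
    for r in (2, 1, 0):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, 3))) / A[r][r]
    return beta  # coefficients (a, b, c)
```

Fitting mean sorting times against the n levels and inspecting the residuals gives the same kind of evidence as the plotted curve.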

(2) m-Time Plot
From Figure 2 it is clear that the complexity is again of second degree in m.

Figure 2

(3) P-Time Plot
From Figure 3 the sorting time is likewise of second degree in P.

Figure 3

We may conclude that, as far as the singular inputs are concerned, the experimental complexity can be well explained by a second-degree polynomial in n for each of the binomial inputs.

For further investigation, a second-degree composite response surface design was fitted using the MINITAB statistical package, and the results are given in Table 5.

Table 5: Analysis of variance for response surface design.

The contour plots for the different factor pairs are given below.

(1) n-m Plot
The minimum response is found in the middle and the highest response near the upper left corner (see Figure 4).

Figure 4

(2) n-P Plot
A minimum time of 0.7 seconds occurs near the middle and rises to 22 seconds when m is held at its middle point (see Figure 5).

Figure 5

(3) m-P Plot
With n held at its middle point, the minimum time of 0.7 seconds increases to 22 seconds, which occurs in the lower right corner (see Figure 6).

Figure 6

For a target response of 1 second, the optimal solution gives a minimum sorting time of 0.43 seconds.

The capital letter P indicates the probability of success in a trial for the binomial variate and should not be confused with the lowercase p of the p value in Tables 2 and 5, which gives the smallest level of significance at which a test statistic (here the F statistic) becomes significant. The level of significance is the probability of committing a type I error, that is, the probability of rejecting the null hypothesis when it is true. Thus P and p represent two different probabilities.

Further results obtained are as follows: G-optimality (average leverage): 0.5000; maximum leverage: 0.5946.

3. Conclusion

As far as the New Sorting method is concerned, taking the pivot element as the middle element is preferable to taking it as the first element or selecting it randomly. The new sorting technique with the middle element as pivot is also more efficient in the sense that it can sort a larger number of elements. However, when compared with Quicksort for the same number of observations to be sorted, the latter is found to be faster. Our future work is to improve the method by eliminating the auxiliary array, which may further reduce the sorting time as well as the space complexity.


  1. K. K. Sundararajan and S. Chakraborty, “A new sorting algorithm,” Applied Mathematics and Computation, vol. 188, no. 1, pp. 1037–1041, 2007.
  2. P. Kumar, A. Kumari, and S. Chakraborty, “Parameterized complexity on a new sorting algorithm: a study in simulation,” Annals Computer Science Series, vol. 8, no. 2, pp. 9–22, 2009.
  3. A. Kumari and S. Chakraborty, “Software complexity: a statistical case study through insertion sort,” Applied Mathematics and Computation, vol. 190, no. 1, pp. 40–50, 2007.
  4. A. Kumari and S. Chakraborty, “A simulation study on quick sort parameterized complexity using response surface design,” International Journal of Mathematical Modeling, Simulation and Applications, vol. 1, no. 4, pp. 448–458, 2008.
  5. H. Mahmoud, Sorting: A Distribution Theory, John Wiley & Sons, 2000.