Abstract

In a mobile computing environment, waiting time is an important indicator of customer satisfaction. To achieve better customer satisfaction and shorter waiting times, we must overcome various constraints and make the best use of limited resources. In this study, we propose a minimization problem that allocates unequal-size data items to broadcast channels with various bandwidths. The main idea is to solve the problem in the continuous space $\mathbb{R}$. First, we map the discrete optimization problem from $\mathbb{Z}$ to $\mathbb{R}$. Second, the mapped problem is solved optimally in $\mathbb{R}$. Finally, we map the optimal solution from $\mathbb{R}$ back to $\mathbb{Z}$. The theoretical analyses ensure both the solution quality and the execution speed. Computational experiments show that the proposed algorithm performs well. The worst mean relative error can be reduced to 0.353 for data items with a mean size of 100. Moreover, almost all the near-optimal solutions are obtained within 1 millisecond, even for large $N$, where $N$ is the number of data items, that is, the problem size.

1. Introduction

Broadcasting is an efficient mechanism to transmit information in a mobile computing environment. Popular messages (e.g., weather reports) or instant information (e.g., stock quotes) can be widely disseminated via the broadcast mechanism. This success is mainly due to the high bandwidths of downlinks, which are used for the transmission of data items from a broadcast server to an unlimited number of mobile users. Note that the bandwidths of channels and the sizes of data items might be unequal. The simplest strategy is to allocate data items equally to all channels, that is, to balance the load, but this is not the best way to reduce waiting time. Consequently, sophisticated broadcast scheduling or data partition algorithms are called for.

Waiting time, or expected delay, is an important indicator for measuring broadcast performance, for the waiting times of mobile users directly influence their satisfaction [15]. For example, Chien and Lin [3] found that customers’ waiting experiences may negatively affect their attitudes towards a given service; conversely, improving the service improves user experiences. Therefore, a good broadcast mechanism that is able to reduce waiting time can achieve better customer satisfaction.

Let us first observe the simplest form of such problems: single-item queries, multiple channels, and skewed data preferences. Multiple equal-size data items need to be broadcast periodically over several equal-bandwidth channels. In a mobile computing environment, users download their desired items via mobile devices, such as smartphones. Imagine that we allocate a few popular items to one channel, yielding a short cycle length, and the remaining ordinary items to another channel, yielding a long cycle length. Most users can then download popular items in a short time, without a long wait. Clearly, access probability and cycle length directly influence broadcast scheduling. Note that the workloads of the channels (i.e., the amounts of data) are unbalanced in an optimal allocation. Once the optimal schedule is found, we can achieve shorter waiting times and better customer satisfaction. However, determining optimal schedules requires much execution time; the time complexity excludes the related optimization algorithms from practical use.

Now we consider more complicated forms of such problems, which may have unequal-size data items or various bandwidths. There are still multiple channels and skewed preferences for data items. For example, in [6], the authors assumed that the bandwidths of the channels were different, which makes the problem more difficult. In [7], the authors considered another variant, in which the sizes of the data items are unequal. This assumption makes the problem more flexible and practical.

In recent years, various other forms have also been studied. In [8], mobile users could download multiple items at a time by sending a simple query. The authors assumed that two queries might have some items in common; to shorten the broadcast cycle length, the duplicate items were centralized and allocated to the same channel. In [9], video was broadcast on a single channel, and the data size was variable. In [10], mobile users could download multiple items in a multichannel environment. However, the wireless links were unreliable, so disconnections occurred frequently. In that study, reducing the waiting time was not the first priority; instead, the authors aimed to minimize the deadline miss rate. All of the above considerations make the problems more complicated. Obviously, these problems cannot be solved easily or optimally when the problem size is large, so more efficient algorithms are needed to deal with them.

Such problems are usually time-consuming or even NP-hard, so several intuitive or metaheuristic algorithms have been proposed. However, traditional algorithms have some shortcomings. Although dynamic programming [7, 12] and branch-and-bound algorithms [13, 14] can provide optimal solutions, they are time-consuming and cannot be applied to large problem instances. On the other hand, some metaheuristic algorithms [8] and greedy approaches [12, 15, 16] generate solutions very quickly. Nevertheless, their solution quality is not stable, because their searches amount to random walks in the solution space: for the same problem instance, multiple executions of an identical algorithm can produce different solutions. Moreover, as in the case of tabu search, a great amount of memory is needed to keep track of past moves. Such algorithms are also unsuitable for large problem instances.

In this study, we consider a waiting time minimization problem (abbreviated as WTM). To make this problem more flexible and practical, meaning that unequal bandwidths and various data sizes are allowed, we propose a linearly convergent algorithm based on a steepest descent technique. First, WTM is mapped from $\mathbb{Z}$ (the discrete space) to $\mathbb{R}$ (the continuous space), yielding a continuous problem WTM′. Next, WTM′ is solved optimally in $\mathbb{R}$ in linear time. Finally, the optimal solution is mapped from $\mathbb{R}$ back to $\mathbb{Z}$.

The rest of this study is organized as follows. Section 2 gives the formal definition of the proposed problem. Section 3 establishes the theoretical basis for the problem. Section 4 presents the linearly convergent algorithm. This study is compared with past research in Section 5. In Section 6, computational experiments are conducted to evaluate the performance of the proposed algorithm. Finally, conclusions are drawn in Section 7.

2. Problem Formulation

The waiting time minimization problem (WTM) is formulated as follows. There is a database $D = \{d_1, d_2, \dots, d_N\}$ to be broadcast in a mobile computing environment, where $d_i$ is the $i$th data item for $1 \le i \le N$. Let $q_i$ and $s_i$ denote the access probability and the size of $d_i$, respectively. The total amount of data is $\sum_{i=1}^{N} s_i$. Assume that the access pattern $\pi$, that is, the sequence of the $d_i$, is given in advance. Assume that a broadcast server is equipped with $K$ channels, numbered from 1 to $K$. We let the bandwidth of channel $j$ be $w_j$. Without loss of generality, assume that $w_j \ge w_{j+1}$ for all $j$. Each channel is divided into time slices of equal size, called buckets. We need to partition $D$ into $K$ parts based on the access pattern and assign each part to one of the channels, one item occupying several consecutive buckets. Then $K$ broadcast programs are formed, and each of them is broadcast cyclically. The average waiting time is defined as the average amount of time spent by a user before he/she receives the desired data item. Finally, the objective is to minimize the average waiting time of the programs under the above assumptions. The objective function can be written as $\min \sum_{j=1}^{K} \frac{1}{2 w_j} \big(\sum_{d_i \in D_j} q_i\big) \big(\sum_{d_i \in D_j} s_i\big)$, where $d_i \in D_j$ means that $d_i$ is allocated to channel (or machine) $j$.

Three properties regarding WTM are discussed as follows. First, for each single channel $j$, the corresponding broadcast program cycle length is $c_j = \big(\sum_{d_i \in D_j} s_i\big)/w_j$. The expected waiting time in receiving $d_i$ on the channel is then about half the cycle; if $c_j$ is much larger than $s_i/w_j$, the expected waiting time can be simplified to $c_j/2$. Namely, the position order of data items within a channel makes no difference to the final result, that is, the average waiting time. Second, the average waiting time is determined only by the data partition: it is the weighted sum of the average waiting time on each channel, that is, $\sum_{j=1}^{K} \big(\sum_{d_i \in D_j} q_i\big)\, c_j/2$. Third, such minimization problems are time-consuming or even NP-hard [6, 16, 17]. Consequently, efficient minimization algorithms are called for. With the above observations, we can define proper objective functions in the next section.
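To make the first two properties concrete, here is a small worked example (the numbers are ours, chosen purely for illustration). Suppose $K = 2$, $w_1 = 2$, $w_2 = 1$, and the partition gives channel 1 a total probability of 0.7 and total size of 8, and channel 2 a total probability of 0.3 and total size of 6. Then

\begin{aligned}
c_1 &= \tfrac{8}{2} = 4, \qquad c_2 = \tfrac{6}{1} = 6,\\
\text{average waiting time} &= \sum_{j=1}^{K} \Big(\sum_{d_i \in D_j} q_i\Big) \frac{c_j}{2}
  = 0.7 \cdot \tfrac{4}{2} + 0.3 \cdot \tfrac{6}{2} = 1.4 + 0.9 = 2.3 .
\end{aligned}

Note that the result depends only on which items share a channel, not on their order within it.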

3. Theoretical Basis

In this section, a steepest descent technique is employed to solve the problem. First, we map the WTM problem from $\mathbb{Z}$ to a new problem WTM′ in $\mathbb{R}$. Second, we employ a steepest descent technique [11, 18] to obtain the optimal solution to WTM′ in $\mathbb{R}$. Finally, we map the optimal solution from $\mathbb{R}$ back to $\mathbb{Z}$. The main idea behind the transformation is to improve the solution quality and execution speed. Although WTM can be solved optimally by some optimization algorithms (e.g., a branch-and-bound algorithm) in $\mathbb{Z}$, this is too time-consuming when $N$ is large. On the other hand, though some metaheuristic algorithms (e.g., genetic algorithms) are able to provide instant solutions in $\mathbb{Z}$, their solution quality is not guaranteed: for the same problem instance, they may provide different solutions. Consequently, instead of directly optimizing WTM in $\mathbb{Z}$, we map WTM to $\mathbb{R}$ and take advantage of the linear convergence and optimality of the gradient-based technique in $\mathbb{R}$.

The parameters used in this study are summarized in Parameters at the end of the paper. Parameters $N$, $K$, and $w_j$ are defined earlier. Patterns $\sigma$ and $\sigma'$ are the access patterns used in the two spaces, that is, $\mathbb{Z}$ and $\mathbb{R}$. We also need two cumulative functions, $Q$ and $S$, and two interpolating functions, $\bar{Q}$ and $\bar{S}$, to define the objective function $f$ in $\mathbb{Z}$ and $f'$ in $\mathbb{R}$, respectively. Moreover, $\mathbf{x}$ and $\mathbf{y}$ are partition vectors (or position vectors) in $\mathbb{Z}$ and $\mathbb{R}$.

3.1. Mapping WTM from $\mathbb{Z}$ to $\mathbb{R}$

First, two objective functions, for WTM and WTM′ respectively, are defined. Then we show how to map WTM in $\mathbb{Z}$ to WTM′ in $\mathbb{R}$. Finally, we prove that the geometric properties, such as concavity, of WTM and WTM′ are similar; these proofs ensure that the two solution spaces are close to each other.

The relationship between WTM and WTM′ is similar to that between the 0-1 knapsack problem and the fractional knapsack problem [19]. If we can solve the problem in $\mathbb{R}$ optimally, then the rounded solution mapped back to $\mathbb{Z}$ is a near-optimal solution to the original problem. To show this relationship, two proper objective functions play an important role. Namely, the objective functions of WTM and WTM′ must resemble each other. Once the two similar objective functions are determined, we can claim that the optimal solution of WTM′ is very close to that of WTM.

Definition 1 helps us to transform the data partition problem into a sequence partition problem. Since different position orders of an access pattern lead to different results, we need the following definition to reason about the optimality of both WTM and WTM′.

Definition 1. Given an optimal solution, the $K$ programs can be concatenated into an optimal program. Let $\sigma$ denote the sequence of $D$ that has the same order as the optimal program for WTM. Similarly, let $\sigma'$ denote the sequence of $D$ that has the same order as the optimal program for WTM′.

WTM and WTM′ have different preferences for the order of the access pattern. Consider again the relationship between the 0-1 knapsack problem and the fractional knapsack problem. For the fractional knapsack problem, if we take the items of greatest unit value one by one in a preemptive manner, we easily achieve the maximum total value [19]. Thus we solve the continuous-case WTM′ by sorting the items in descending order of $q_i/s_i$; that is, the sequence $\sigma'$ is obtained by sorting $D$ by $q_i/s_i$ in nonincreasing order. On the other hand, for the 0-1 knapsack problem, we may achieve the optimal solution only by sacrificing some valuable items because they are too big to fit the knapsack. Clearly, the optimal item partition of the 0-1 knapsack problem cannot be obtained by simple sorting rules. That is, when we reduce an item partition problem to a sequence partition problem, it is difficult to determine the optimal sequence for partition. Consequently, we do not aim to find the optimal sequence $\sigma$ for WTM. Instead, we use $\sigma'$ for WTM′ to simulate $\sigma$.

Definitions 2 and 3 introduce two cumulative functions regarding the access pattern $\sigma'$. Hereafter, assume the items are renumbered so that $\sigma' = (d_1, d_2, \dots, d_N)$; that is, $q_1/s_1 \ge q_2/s_2 \ge \cdots \ge q_N/s_N$. With the two cumulative functions, we can define a new objective function in $\mathbb{R}$ in a more comprehensive way for later transformation.

Definition 2. Given $\sigma'$, the function of cumulative probability is defined by $Q(i) = \sum_{k=1}^{i} q_k$ for $0 \le i \le N$, with $Q(0) = 0$.

Similarly, we define another function to express the cycle length of a broadcast program. The function is defined as follows.

Definition 3. Given $\sigma'$, the function of the cumulative data size is defined by $S(i) = \sum_{k=1}^{i} s_k$ for $0 \le i \le N$, with $S(0) = 0$.

Now we redefine the objective function for the original problem WTM in $\mathbb{Z}$. The original objective function is defined from the viewpoint of data partition, whereas the new objective function is formulated from the viewpoint of sequence partition. With the cumulative functions $Q$ and $S$, we can determine a proper sequence in advance and then perform partition on the sequence. For simplicity, the leading coefficient 0.5 of the expected waiting time of each channel is omitted in the rest of this study.

Definition 4. Given $\sigma'$ and two constants $x_0 = 0$ and $x_K = N$, for any column vector $\mathbf{x} = (x_1, x_2, \dots, x_{K-1})^T$ with $x_0 \le x_1 \le \cdots \le x_K$, the objective function of WTM is defined by

$$f(\mathbf{x}) = \sum_{j=1}^{K} \frac{1}{w_j}\,\big[Q(x_j) - Q(x_{j-1})\big]\,\big[S(x_j) - S(x_{j-1})\big],$$

where $w_j$ is the bandwidth of channel $j$.

Note that data items numbered $x_{j-1}+1$ through $x_j$ are allocated to channel $j$. Therefore, the access probability of channel $j$ is $Q(x_j) - Q(x_{j-1})$, and the program cycle length of the channel is $[S(x_j) - S(x_{j-1})]/w_j$.
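The following short Python sketch (ours; the function names are hypothetical, not from the paper) implements Definitions 2–4: it sorts the items by $q_i/s_i$ to form $\sigma'$, builds the cumulative arrays $Q$ and $S$, and evaluates the discrete objective $f$ for a partition vector.

from itertools import accumulate

def build_sigma_prime(q, s):
    # Order item indices by access-probability-to-size ratio, nonincreasing.
    return sorted(range(len(q)), key=lambda i: q[i] / s[i], reverse=True)

def cumulative(values, order):
    # Q(0) = S(0) = 0; the ith entry accumulates the first i items of sigma'.
    return [0.0] + list(accumulate(values[i] for i in order))

def f(x, Q, S, w):
    # x lists the K-1 interior cut points; channel j receives items
    # x_{j-1}+1 .. x_j of sigma'. The leading 0.5 is omitted, as in the text.
    N = len(Q) - 1
    cuts = [0] + list(x) + [N]
    return sum((Q[cuts[j + 1]] - Q[cuts[j]]) * (S[cuts[j + 1]] - S[cuts[j]]) / w[j]
               for j in range(len(w)))

For instance, with q = [0.5, 0.3, 0.2], s = [4, 2, 2], and w = [2, 1], build_sigma_prime orders the items as (2, 1, 3) by ratio, and f((1,), Q, S, w) prices the partition that puts the top-ratio item alone on the fast channel.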

An interpolating function regarding access probability is defined for mapping WTM from $\mathbb{Z}$ to $\mathbb{R}$. To preserve the geometric properties, we interpolate the points $(i, Q(i))$, $0 \le i \le N$, by using separate line segments. The interpolating function is defined as follows.

Definition 5. For $1 \le i \le N$, consider the $i$th interval $[i-1, i]$ and let a straight line pass through the two points $(i-1, Q(i-1))$ and $(i, Q(i))$. The interpolating function $\bar{Q}$ is given by $\bar{Q}(t) = Q(i-1) + q_i\,(t - i + 1)$ for $t \in [i-1, i]$.

An interpolating function regarding the program cycle length is also defined as follows. We let it interpolate the points $(i, S(i))$, $0 \le i \le N$, by using separate line segments.

Definition 6. For $1 \le i \le N$, consider the $i$th interval $[i-1, i]$ and let a straight line pass through the two points $(i-1, S(i-1))$ and $(i, S(i))$. The interpolating function $\bar{S}$ is given by $\bar{S}(t) = S(i-1) + s_i\,(t - i + 1)$ for $t \in [i-1, i]$.

Just as we defined the objective function $f$ for WTM in $\mathbb{Z}$, we define a dummy objective function $f'$ for WTM′ in $\mathbb{R}$ as follows.

Definition 7. Given $\sigma'$ and two constants $y_0 = 0$ and $y_K = N$, for any position vector $\mathbf{y} = (y_1, y_2, \dots, y_{K-1})^T$ with $y_0 \le y_1 \le \cdots \le y_K$, the objective function of WTM′ is defined by

$$f'(\mathbf{y}) = \sum_{j=1}^{K} \frac{1}{w_j}\,\big[\bar{Q}(y_j) - \bar{Q}(y_{j-1})\big]\,\big[\bar{S}(y_j) - \bar{S}(y_{j-1})\big],$$

where $\bar{Q}$, $\bar{S}$ are the interpolating functions and $w_j$ is the bandwidth of channel $j$.
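A matching sketch (ours) of Definitions 5–7: numpy's np.interp realizes exactly the piecewise-linear interpolants $\bar{Q}$ and $\bar{S}$, and f_prime evaluates any real-valued position vector.

import numpy as np

def make_f_prime(Q, S, w):
    # Q, S are the cumulative lists of Definitions 2-3 (length N+1).
    N = len(Q) - 1
    grid = np.arange(N + 1)
    Qbar = lambda t: np.interp(t, grid, Q)   # line segments through (i, Q(i))
    Sbar = lambda t: np.interp(t, grid, S)   # line segments through (i, S(i))
    def f_prime(y):
        # y holds the K-1 interior positions; endpoints are fixed at 0 and N.
        cuts = np.concatenate(([0.0], np.asarray(y, float), [float(N)]))
        return float(sum((Qbar(cuts[j + 1]) - Qbar(cuts[j])) *
                         (Sbar(cuts[j + 1]) - Sbar(cuts[j])) / w[j]
                         for j in range(len(w))))
    return f_prime

By construction f_prime agrees with f at every integer vector, which is exactly what Lemma 8 below states.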

Since $f$ and $f'$ have the same objective costs at all grid points, the solution spaces of WTM and WTM′ have similar geometric properties, for example, slope, extrema, and concavity. Lemma 8 shows that both functions agree with each other at all grid points.

Lemma 8. Given $\sigma'$, for any integer vector $\mathbf{x}$ with $x_0 \le x_1 \le \cdots \le x_K$, one has $f(\mathbf{x}) = f'(\mathbf{x})$.

Proof. Note that $\bar{Q}(i) = Q(i)$ and $\bar{S}(i) = S(i)$ for all integers $0 \le i \le N$. Then

$$f'(\mathbf{x}) = \sum_{j=1}^{K} \frac{\big[\bar{Q}(x_j) - \bar{Q}(x_{j-1})\big]\big[\bar{S}(x_j) - \bar{S}(x_{j-1})\big]}{w_j} = \sum_{j=1}^{K} \frac{\big[Q(x_j) - Q(x_{j-1})\big]\big[S(x_j) - S(x_{j-1})\big]}{w_j} = f(\mathbf{x}).$$

The proof is complete.

By Lemma 8, $f$ and $f'$ have the same function values at grid points; that is, they have similar geometric properties. Therefore, the solution space of WTM in $\mathbb{Z}$ is close to that of WTM′ in $\mathbb{R}$, and the optimal solutions to WTM and WTM′ are also close to each other.

3.2. Optimal Solution to WTM′

In this subsection, we solve WTM′ optimally and obtain the optimal solution $\mathbf{y}^*$ in $\mathbb{R}$. First, we introduce the concept of the gradient. Then the optimality and convergence speed are discussed.

The steepest descent technique we employ is based on the concept of the gradient [11]. Unlike other metaheuristic algorithms, the steepest descent technique always converges to the global minimum instead of piecing together several local minima. Moreover, it converges in the $(K-1)$-dimensional space in linear time instead of performing meaningless random walks. The notion of the gradient is defined as follows.

Definition 9. Let $f': \mathbb{R}^{K-1} \to \mathbb{R}$ be a continuously differentiable multivariable function. The gradient of $f'$ at $\mathbf{y}$ is denoted by $\nabla f'(\mathbf{y})$ and defined by $\nabla f'(\mathbf{y}) = \big(\partial f'/\partial y_1, \dots, \partial f'/\partial y_{K-1}\big)^T$.

Because $f'$ is not differentiable at integer points, we remedy this shortcoming by an improper (one-sided) limit. The $j$th element of $\nabla f'(\mathbf{y})$ is modified slightly and redefined as $\lim_{h \to 0^+} \big[f'(\mathbf{y} + h\mathbf{e}_j) - f'(\mathbf{y})\big]/h$, where $\mathbf{e}_j$ is the $j$th unit vector. Note that $\bar{Q}$ and $\bar{S}$ are therefore right-hand differentiable everywhere; on the interval $[b, b+1)$, with $b = \lfloor y_j \rfloor$, their right-hand slopes are simply $q_{b+1}$ and $s_{b+1}$. Thus the gradient of $f'$ can be obtained for any $\mathbf{y}$, and the $j$th element of $\nabla f'(\mathbf{y})$ becomes

$$\frac{q_{b+1}\big[\bar{S}(y_j) - \bar{S}(y_{j-1})\big] + s_{b+1}\big[\bar{Q}(y_j) - \bar{Q}(y_{j-1})\big]}{w_j} - \frac{q_{b+1}\big[\bar{S}(y_{j+1}) - \bar{S}(y_j)\big] + s_{b+1}\big[\bar{Q}(y_{j+1}) - \bar{Q}(y_j)\big]}{w_{j+1}}.$$

More specifically, when programming, we do not need to differentiate $f'$ symbolically to obtain the corresponding slope; the slope on each unit interval is read directly from $q_{b+1}$ and $s_{b+1}$. Hence, we eliminate annoying differentiation procedures in the proposed algorithm.
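As a sketch (ours) of Definition 9's practical point: the right-hand slope of $\bar{Q}$ at $t$ is just $q_{\lfloor t \rfloor + 1}$, so each gradient component is assembled from array lookups, with no differentiation at all.

import numpy as np

def grad_f_prime(y, Q, S, q_ord, s_ord, w):
    # q_ord, s_ord: probabilities and sizes in sigma' order (0-based), so the
    # right-hand slope of Qbar on [b, b+1) is q_ord[b], and likewise for Sbar.
    N = len(Q) - 1
    grid = np.arange(N + 1)
    Qb = lambda t: np.interp(t, grid, Q)
    Sb = lambda t: np.interp(t, grid, S)
    cuts = np.concatenate(([0.0], np.asarray(y, float), [float(N)]))
    g = np.zeros(len(y))
    for j in range(1, len(cuts) - 1):            # interior boundary y_j
        b = min(int(cuts[j]), N - 1)             # floor(y_j), clamped at N-1
        dq, ds = q_ord[b], s_ord[b]
        g[j - 1] = ((dq * (Sb(cuts[j]) - Sb(cuts[j - 1])) +
                     ds * (Qb(cuts[j]) - Qb(cuts[j - 1]))) / w[j - 1] -
                    (dq * (Sb(cuts[j + 1]) - Sb(cuts[j])) +
                     ds * (Qb(cuts[j + 1]) - Qb(cuts[j]))) / w[j])
    return g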

Similar algorithms are found in [2, 11, 20, 21]; we modify them slightly in order to obtain the optimal solution in $\mathbb{R}$. The details of the algorithm will be presented in the next section. Here, we show only the basic steps of the steepest descent technique: (1) evaluate $f'$ at an initial position $\mathbf{y}^{(0)}$; (2) determine the steepest descent direction $-\nabla f'(\mathbf{y})$, which results in a decrease in the value of $f'$; (3) move an appropriate amount $\alpha$ (i.e., the step size) in this direction, so that the new position is $\mathbf{y} - \alpha \nabla f'(\mathbf{y})$; (4) set $\mathbf{y} \leftarrow \mathbf{y} - \alpha \nabla f'(\mathbf{y})$; (5) repeat steps (1) through (4) until the optimal solution is obtained. In order to implement step (3) easily, we reduce the problem to the single-variable function $h(\alpha) = f'\big(\mathbf{y} - \alpha \nabla f'(\mathbf{y})\big)$. Note that the value $\alpha^*$ that minimizes $h$ is also the step size needed in step (4). Because an exact root-finding process for $h'(\alpha) = 0$ requires much execution time, Burden and Faires [11] employed a quadratic polynomial to interpolate $h$ in order to accelerate the process, as sketched below. The details regarding the quadratic polynomial will be shown in the next section.
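In our notation, the quadratic acceleration of [11] works as follows: sample $h$ at $\alpha_1 = 0$, $\alpha_2$, and $\alpha_3$ with $g_i = h(\alpha_i)$, form Newton's forward divided differences, and jump to the critical point of the interpolating quadratic $P$:

\begin{aligned}
h_1 &= \frac{g_2 - g_1}{\alpha_2}, \qquad
h_2 = \frac{g_3 - g_2}{\alpha_3 - \alpha_2}, \qquad
h_3 = \frac{h_2 - h_1}{\alpha_3},\\
P(\alpha) &= g_1 + h_1 \alpha + h_3\, \alpha (\alpha - \alpha_2),
\qquad
P'(\alpha_0) = 0 \;\Longrightarrow\;
\alpha_0 = \frac{1}{2}\Big(\alpha_2 - \frac{h_1}{h_3}\Big).
\end{aligned}

The step actually taken is whichever of $\alpha_0$ and $\alpha_3$ gives the smaller value of $h$.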

The proposed algorithm converges linearly; the proofs of convergence are omitted, and readers can refer to [11, 18]. Even though the algorithm converges rapidly, an accurate initial solution can accelerate the convergence even more. The following definition and lemma help us choose an accurate initial solution.

Definition 10. Let $y_0 = 0$ and $y_K = N$ be two constants. For any position vector $\mathbf{y}$, the length of the $j$th interval is denoted by $\ell_j(\mathbf{y})$ and defined by $\ell_j(\mathbf{y}) = y_j - y_{j-1}$ for $1 \le j \le K$. Likewise, the contribution to $f'$ caused by channel $j$ is denoted by $\Delta_j(\mathbf{y})$ and defined by $\Delta_j(\mathbf{y}) = \frac{1}{w_j}\big[\bar{Q}(y_j) - \bar{Q}(y_{j-1})\big]\big[\bar{S}(y_j) - \bar{S}(y_{j-1})\big]$ for $1 \le j \le K$.
The following lemma shows how to obtain an accurate initial solution by determining the elements of a position vector (or partition vector) $\mathbf{y}$.

Lemma 11. If $\mathbf{y}^*$ is the optimal solution to the dummy objective function $f'$, then $\Delta_j(\mathbf{y}^*) \ge \Delta_{j+1}(\mathbf{y}^*)$ for $1 \le j \le K-1$.

Proof. We show it by contradiction. Suppose $\Delta_j(\mathbf{y}^*) < \Delta_{j+1}(\mathbf{y}^*)$ for some $j$. Since $\mathbf{y}^*$ is the optimal solution to $f'$, the sum of products $\Delta_j + \Delta_{j+1}$ must be minimal with respect to the placement of the boundary in the interval $[y^*_{j-1}, y^*_{j+1}]$. On the other hand, since $\Delta_j < \Delta_{j+1}$, there exists a number $\delta > 0$ such that moving the boundary from $y^*_j$ to $y^*_j + \delta$ transfers a small amount of probability mass and data size from channel $j+1$ to channel $j$. We claim that this move decreases $\Delta_j + \Delta_{j+1}$. Since the sequence $\sigma'$ is sorted by $q_i/s_i$ in descending order, $t_1 \le t_2$ implies that the right-hand slope ratio $\bar{Q}'(t_1)/\bar{S}'(t_1)$ is at least $\bar{Q}'(t_2)/\bar{S}'(t_2)$; that is, the transferred mass is at least as valuable per unit size to channel $j$ as it was to channel $j+1$. A direct computation of the two contributions before and after the move, together with $w_j \ge w_{j+1}$, shows that the sum strictly decreases. It contradicts the fact that $\mathbf{y}^*$ is the optimal solution. The proof is complete.

By the above lemma, the proposed algorithm is able to start from a better initial position $\mathbf{y}^{(0)}$. We find $y_1, y_2, \dots, y_{K-1}$ such that the channel contributions are balanced, that is, $\Delta_1(\mathbf{y}) = \Delta_2(\mathbf{y}) = \cdots = \Delta_K(\mathbf{y})$, where $\Delta_j$ is as in Definition 10. In the next section, we set $\mathbf{y}^{(0)}$ to this balanced vector instead of choosing it randomly. This initial position will also enhance the convergence speed.
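One concrete way to realize such a balanced start (a sketch, ours; the paper's exact balancing procedure is not fully recoverable here) is to sweep the sorted items and close each channel once its contribution reaches an equal share of a rough estimate of the total cost:

def initial_position(Q, S, w, K):
    # Balanced initializer motivated by Lemma 11: give each channel roughly
    # the same contribution Delta_j = [Q-gain] * [S-gain] / w_j.
    N = len(Q) - 1
    target = Q[N] * S[N] / sum(w) / K     # crude equal-share estimate
    y, prev = [], 0
    for j in range(K - 1):
        i = prev + 1
        # leave at least one item for each remaining channel
        while i < N - (K - 2 - j) and \
              (Q[i] - Q[prev]) * (S[i] - S[prev]) / w[j] < target:
            i += 1
        y.append(float(i))
        prev = i
    return y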

Note that the partition vector $\mathbf{y}^*$ is a global minimizer of WTM′. We prove this property by showing that $f'$ is concave upward in each coordinate $y_j$. Namely, for any initial vector, the algorithm converges to the global minimum.

Lemma 12. Let the algorithm converge locally at some $\mathbf{y}^*$. Then $\mathbf{y}^*$ is the global optimal solution to WTM′.

Proof. We prove the property by showing that, for any $j$, $f'$ is concave upward in $y_j$. From Definition 7 we get

$$\frac{\partial f'}{\partial y_j} = \frac{\bar{Q}'(y_j)\big[\bar{S}(y_j) - \bar{S}(y_{j-1})\big] + \big[\bar{Q}(y_j) - \bar{Q}(y_{j-1})\big]\bar{S}'(y_j)}{w_j} - \frac{\bar{Q}'(y_j)\big[\bar{S}(y_{j+1}) - \bar{S}(y_j)\big] + \big[\bar{Q}(y_{j+1}) - \bar{Q}(y_j)\big]\bar{S}'(y_j)}{w_{j+1}}$$

for $1 \le j \le K-1$. Note that $y_{j-1}$ and $y_{j+1}$ are held constant. We have $\bar{Q}'' = 0$ and $\bar{S}'' = 0$, since $\bar{Q}$ and $\bar{S}$ are composed of straight lines; the second partial derivative therefore reduces to

$$\frac{\partial^2 f'}{\partial y_j^2} = 2\,\bar{Q}'(y_j)\,\bar{S}'(y_j)\left(\frac{1}{w_j} + \frac{1}{w_{j+1}}\right).$$

Because $\bar{Q}$ interpolates the strictly increasing function $Q$, it is clear that $\bar{Q}'(y_j) > 0$; similarly, $\bar{S}'(y_j) > 0$. Therefore, $\partial^2 f'/\partial y_j^2 > 0$ for all $j$. The proof is complete.

3.3. Near-Optimal Solution to WTM

We show how to obtain a near-optimal solution to WTM in this subsection. First, we map the optimal solution $\mathbf{y}^*$ in $\mathbb{R}$ back to $\mathbb{Z}$ by rounding off all elements of $\mathbf{y}^*$, obtaining a rounded solution $\hat{\mathbf{x}}$. The following lemma shows the magnitude order of $f'(\mathbf{y}^*)$, $f(\mathbf{x}^*)$, and $f(\hat{\mathbf{x}})$, where $\mathbf{x}^*$ minimizes $f$ and $\mathbf{y}^*$ minimizes $f'$.

Lemma 13. Let $\hat{\mathbf{x}}$ be the rounded resulting solution to WTM. Then $f'(\mathbf{y}^*)$ and $f(\hat{\mathbf{x}})$ are two bounds of the optimal cost $f(\mathbf{x}^*)$, with

$$f'(\mathbf{y}^*) \le f(\mathbf{x}^*) \le f(\hat{\mathbf{x}}),$$

and the Euclidean distance between $\hat{\mathbf{x}}$ and $\mathbf{y}^*$ is $\lVert \hat{\mathbf{x}} - \mathbf{y}^* \rVert \le \sqrt{K-1}/2$.

Proof. Since $\mathbf{x}^*$ minimizes $f$, we have $f(\mathbf{x}^*) \le f(\hat{\mathbf{x}})$. On the other hand, we need to show $f'(\mathbf{y}^*) \le f(\mathbf{x}^*)$. Note that $\sigma$ is the optimal sequence for the discrete-case problem WTM and that it is difficult to determine. We therefore simply take $\mathbf{x}^*$ to be the optimal solution to $f$ as defined on $\sigma'$. Since $\sigma'$ is obtained by a simple sorting rule and dedicated to the continuous-case problem WTM′, the sequence $\sigma$ is more suitable for the discrete-case objective function $f$ than $\sigma'$ is.
We show $f'(\mathbf{y}^*) \le f(\mathbf{x}^*)$ by contradiction. Suppose $f'(\mathbf{y}^*) > f(\mathbf{x}^*)$. On the other hand, by Lemma 8, $f(\mathbf{x}^*) = f'(\mathbf{x}^*)$, since $\mathbf{x}^*$ is an integer vector. Therefore, $f'(\mathbf{y}^*) > f'(\mathbf{x}^*)$. However, $\mathbf{y}^*$ is the optimal solution to WTM′. It is a contradiction.
Since each $\hat{x}_j$ is rounded from $y^*_j$, it is obvious that $|\hat{x}_j - y^*_j| \le 1/2$, and thus $\lVert \hat{\mathbf{x}} - \mathbf{y}^* \rVert \le \sqrt{K-1}/2$. The proof is complete.

By Lemma 13, we know that $f(\mathbf{x}^*)$ is bounded below by $f'(\mathbf{y}^*)$ and above by $f(\hat{\mathbf{x}}) = f'(\hat{\mathbf{x}})$ (the equality holds by Lemma 8), and $\hat{\mathbf{x}}$ lies within distance $\sqrt{K-1}/2$ of $\mathbf{y}^*$. This guarantees that the proposed algorithm will output near-optimal broadcast programs.
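Rounding and bound reporting then take only a few lines (a sketch, ours, reusing the f and f_prime functions from the earlier sketches):

def round_and_bound(y_star, Q, S, w, f, f_prime):
    # Map the continuous optimum back to Z by rounding each coordinate, then
    # report the two-sided bound of Lemma 13: f'(y*) <= f(x*) <= f(x_hat).
    # (Coincident cuts would leave a channel empty; a production version
    # should separate them.)
    N = len(Q) - 1
    x_hat = sorted(min(max(int(round(v)), 1), N - 1) for v in y_star)
    lower = f_prime(y_star)          # lower bound on the unknown optimum
    upper = f(x_hat, Q, S, w)        # achievable cost of the rounded program
    return x_hat, lower, upper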

4. Proposed Algorithm

Algorithm 1 shows the proposed algorithm GRA. In Step 1, we prepare the cumulative functions $Q$ and $S$ and the interpolating functions $\bar{Q}$ and $\bar{S}$ according to $\sigma'$. We then construct the dummy objective function $f'$ and set the initial position $\mathbf{y}$; its elements are chosen so that the channel contributions are balanced (see Lemma 11). In Steps 3 and 4, we evaluate the gradient of $f'$ at $\mathbf{y}$ and determine the steepest descent direction; if a zero gradient occurs, the algorithm stops. In Steps 5–8, we search for a step along the descent direction that decreases $f'$. We then move a distance of $\alpha$ in the steepest descent direction, and $\mathbf{y}$ is replaced with the new position (Steps 9–13). We check whether the stopping criterion is met in Step 14. In Steps 2 and 15, the iteration counter $k$, bounded by $M$, limits the total number of iterations; thus, the algorithm will stop in any case.

Procedure GRA(K, D, σ′, TOL, M)
INPUT: the number of channels K, the database D, the sorted access pattern σ′,
    the tolerance TOL, the maximal number of iterations M.
OUTPUT: the near-optimal solution x̂ to WTM.
Step 1    Calculate Q, S, and construct the corresponding Q̄, S̄;
     Construct the dummy objective function f′;
     Set y = the balanced initial position y⁽⁰⁾; k = 1.
Step 2 While (k ≤ M) do Steps 3–15.
Step 3   Set g1 = f′(y); z = ∇f′(y); z0 = ‖z‖₂.
      //Note that z0 is the Euclidean length of z.
Step 4   If (z0 = 0) then
       Output "y", "Zero gradient!";
       Stop.
Step 5   Set z = z/z0; α1 = 0; α3 = 1; g3 = f′(y − α3 z).
Step 6   While (g3 ≥ g1) do Steps 7 and 8.
Step 7    Set α3 = α3/2; g3 = f′(y − α3 z).
Step 8    If (α3 < TOL/2) then
         Output "y, g1", "No more improvement!";
         Stop.
Step 9   Set α2 = α3/2; g2 = f′(y − α2 z).
Step 10     Set h1 = (g2 − g1)/α2; h2 = (g3 − g2)/(α3 − α2); h3 = (h2 − h1)/α3.
      //Note that we use Newton's forward divided-difference formula [11] to find a quadratic
      //polynomial P(α) which interpolates h(α) at α1, α2, α3.
Step 11      Set α0 = (α2 − h1/h3)/2; g0 = f′(y − α0 z).
      //Note that the critical point of P occurs at α0.
Step 12     Find α from {α0, α3} so that g = f′(y − α z) = min(g0, g3).
Step 13     Set y = y − α z.
Step 14     If (|g − g1| < TOL) then
         Set x̂ = round(y);
         Output "x̂, g, k", and "Success!";
        Stop.
Step 15     Set k = k + 1.
Step 16  Output "y, g1", "Maximum iterations exceeded!".
    Stop.
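For readers who want to experiment, a compact runnable Python sketch of the whole procedure follows (ours; it mirrors the steps above under the reconstruction in Section 3, with a simple evenly spaced start and a numeric right-hand difference in place of the closed-form gradient of Definition 9, whereas the paper's implementation was written in PASCAL):

import numpy as np

def gra(q, s, w, tol=0.01, max_iter=10_000):
    N, K = len(q), len(w)
    order = sorted(range(N), key=lambda i: q[i] / s[i], reverse=True)  # sigma'
    Q = np.concatenate(([0.0], np.cumsum([q[i] for i in order])))
    S = np.concatenate(([0.0], np.cumsum([s[i] for i in order])))
    grid = np.arange(N + 1)
    wv = np.asarray(w, float)

    def fp(y):                                   # dummy objective f'
        c = np.concatenate(([0.0], np.clip(np.sort(y), 0.0, N), [float(N)]))
        return float(np.sum(np.diff(np.interp(c, grid, Q)) *
                            np.diff(np.interp(c, grid, S)) / wv))

    def grad(y, h=1e-6):                         # right-hand difference (Def. 9)
        base = fp(y)
        g = np.zeros_like(y)
        for j in range(len(y)):
            e = np.zeros_like(y); e[j] = h
            g[j] = (fp(y + e) - base) / h
        return g

    y = np.linspace(0.0, N, K + 1)[1:-1]         # simple evenly spaced start
    for _ in range(max_iter):
        g1 = fp(y)
        z = grad(y)
        z0 = np.linalg.norm(z)
        if z0 == 0:                              # Step 4: zero gradient
            break
        z /= z0
        a3, g3 = 1.0, fp(y - z)                  # Steps 5-8: halve until descent
        while g3 >= g1:
            a3 /= 2.0
            g3 = fp(y - a3 * z)
            if a3 < tol / 2:                     # no more improvement
                return np.rint(y).astype(int), g1
        a2 = a3 / 2.0                            # Steps 9-12: quadratic step
        g2 = fp(y - a2 * z)
        h1 = (g2 - g1) / a2
        h2 = (g3 - g2) / (a3 - a2)
        h3 = (h2 - h1) / a3
        a0 = 0.5 * (a2 - h1 / h3) if h3 != 0 else a3
        g0 = fp(y - a0 * z)
        a, g = (a0, g0) if g0 < g3 else (a3, g3)
        y = y - a * z                            # Step 13
        if abs(g - g1) < tol:                    # Step 14: success
            break
    return np.rint(y).astype(int), fp(y)

# Example: x_hat, cost = gra([0.5, 0.3, 0.2], [4, 2, 2], [2, 1])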

5. Comparison with Other Research

In the real world, such minimization problems usually call for instant and near-optimal solutions, especially for large problem instances. As a result, many metaheuristic algorithms have been proposed for providing instant and near-optimal solutions, for example, [22–24]. Although these algorithms achieve fast execution times, their solution quality cannot be ensured or bounded.

Therefore, we need deterministic algorithms that are able to converge linearly and achieve near-optimality. Jea et al. [20] solved a basic related problem, DAP. There, the number of machines corresponds to the number of channels $K$ in this study, and the terms of its objective function take a very simple form. Even so, obtaining an optimal partition is time-consuming when the problem size is larger than 50; DAP was solved near-optimally by their proposed algorithm [20]. Wang and Jea [21] proposed another partition problem, SPP, whose objective terms are slightly more complicated; it was also solved near-optimally in linear time. Jea and Wang [2] introduced yet another partition problem, DBAP, whose objective terms look similar to those of SPP; however, the transformation between $\mathbb{Z}$ and $\mathbb{R}$ becomes more difficult. After establishing a complete theoretical basis, DBAP was also solved near-optimally.

On the other hand, Wang and Chen [25] proposed another minimization problem with additional constraints over all nonempty subsets; the constraints and absolute values make the problem harder. It is interesting that the discretized problem could also be solved by the same technique in $\mathbb{R}$.

In this study, we propose the WTM problem. As shown in Figure 1, WTM is the most complicated of these forms: it relates not only to partition but also to permutation. In the three problems DAP, SPP, and DBAP, the position orders of the items in $\mathbb{Z}$ and $\mathbb{R}$ are the same. However, for WTM, the position orders of the items in $\mathbb{Z}$ and $\mathbb{R}$ might be different. This makes the transformation between $\mathbb{Z}$ and $\mathbb{R}$ more difficult, so we sacrifice some accuracy of position order to force the transformation through. Even so, WTM describes a general form of such problems and still achieves near-optimality.

6. Computational Experiments

The experiments are divided into three parts. First, since the real-world situations are complicated, we need to determine some significant system settings. We develop a basic genetic algorithm (GA) to conduct a pilot experiment for determining these settings. Second, when the problem size is small, both algorithms (i.e., GRA and GA) are compared with an exhaustive search algorithm. Third, when the problem size is large, we compare GA and GRA to evaluate their solution quality and execution speed.

Table 1 summarizes the parameters used in the experiments. Parameters $N$, $K$, $q_i$, $s_i$, and $w_j$ have already been defined in Section 2. Access probability follows a Zipf distribution with parameter $\theta$ [20]. Item size follows a discrete normal distribution with mean $\mu$ and standard deviation $\sigma_s$. Bandwidth follows a discrete uniform distribution. For GA, the population size, crossover rate, and mutation rate are 100, 0.8, and 0.05, respectively. For GRA, the maximum number of iterations and the tolerance are 10,000 and 0.01, respectively. All the proposed algorithms were implemented in PASCAL and executed in a Windows 7 environment on an Intel Xeon E3-1230 @ 3.20 GHz with 8 GB RAM. For each setting, 30 random trials were conducted and recorded.
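For reproducibility, here is a sketch (ours) of the instance generator implied by Table 1; the parameter names theta, mu, sigma_s, w_lo, and w_hi are our stand-ins for the table's settings, whose exact ranges are not all recoverable here:

import numpy as np

def make_instance(N, K, theta, mu, sigma_s, w_lo, w_hi, seed=None):
    rng = np.random.default_rng(seed)
    ranks = np.arange(1, N + 1)
    q = ranks ** (-theta)
    q /= q.sum()                                   # Zipf access probabilities
    s = np.maximum(1, np.rint(rng.normal(mu, sigma_s, N))).astype(int)
    w = rng.integers(w_lo, w_hi + 1, size=K)       # discrete uniform bandwidths
    return q, s, w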

In the first part, we develop a basic genetic algorithm (GA) to test the performance. Each chromosome is a randomly generated vector of keys. For example, let $N = 4$ and $K = 2$, and generate $N + K - 1 = 5$ random values: 0.42, 0.95, 0.13, 0.21, and 0.36. According to their magnitudes, the largest value, 0.95, acts as a channel separator. That is, item 4 (having the fourth smallest value, 0.42) is allocated to channel 1, and items 1, 2, and 3 are allocated to channel 2. Each population consists of 100 random chromosomes. The fitness of the $c$th chromosome is a decreasing function of its objective cost $f_c$, for $1 \le c \le 100$. A standard roulette wheel selection is employed, and two parent chromosomes are selected for generating two child chromosomes by a two-point crossover. Moreover, a simple single-point mutation is adopted. GA terminates if the run time exceeds a preset limit or no improvement is made during the most recent 100 generations.
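A sketch (ours) of that random-key decoding; with the values above, decode([0.42, 0.95, 0.13, 0.21, 0.36], K=2) yields [[4], [1, 2, 3]]:

def decode(chromosome, K):
    # The K-1 largest keys act as channel separators; the ranks of the
    # remaining keys (smallest = item 1) name the items in place.
    order = sorted(range(len(chromosome)), key=lambda i: chromosome[i])
    sep_pos = set(order[-(K - 1):])
    item_rank = {p: r + 1
                 for r, p in enumerate(p for p in order if p not in sep_pos)}
    channels, current = [[] for _ in range(K)], 0
    for pos in range(len(chromosome)):
        if pos in sep_pos:
            current += 1          # a separator opens the next channel
        else:
            channels[current].append(item_rank[pos])
    return channels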

In the second part, Figure 2 shows the effect of the first system setting on performance for small problem instances. The relative error is defined by $(f_{GA} - f_{OPT})/f_{OPT}$ or $(f_{GRA} - f_{OPT})/f_{OPT}$, where $f_{GA}$, $f_{GRA}$, and $f_{OPT}$ are the objective costs obtained by GA, GRA, and the exhaustive search algorithm, respectively. It is seen that GA takes more run time as the setting increases. Moreover, the setting does not influence GRA, and most instances can be solved within 1 millisecond. On the other hand, GA easily becomes trapped in local minima, since such an optimization problem has many of them; consequently, the relative error of GA is relatively high. Since this setting has the least significance, we omit it in later experiments.

Similarly, Figures 3–6 show the effects of the remaining settings on performance for small problem instances. As shown in Figures 3 and 4, identical access probabilities and equal bandwidths make the problem easier, so we only observe skewed access probabilities and unequal bandwidths in later experiments. In Figures 5 and 6, as the item-size parameters $\mu$ and $\sigma_s$ increase, the relative errors increase slightly. The worst mean relative error is 0.353 when $\mu = 100$; that is, the data items are large. Therefore, we only test instances with larger item sizes in later experiments.

Figure 7 compares the two algorithms for a larger problem size in terms of execution speed and solution quality. Again, GRA takes almost no time to obtain near-optimal solutions, whereas GA needs 3 seconds on average. Under this setting, there are more local minima than before, and it becomes more difficult to locate the optimal solution. Therefore, GA takes more run time but still yields larger relative errors.

In the third part, Figures 8 and 9 show the performance of the two algorithms when the problem size is large. Since we cannot obtain optimal solutions when $N$ is large, we compute relative deviations instead, defined by $(f_{GA} - f_{min})/f_{min}$ and $(f_{GRA} - f_{min})/f_{min}$, where $f_{min} = \min(f_{GA}, f_{GRA})$. GRA greatly outperforms GA in terms of both solution quality and execution speed. Most instances are solved within 1 millisecond, and even in the worst case GRA takes only 16 milliseconds. On the other hand, GA cannot jump out of local minima, no matter how many trials it makes. Its solution quality depends highly on the initial population: if there are no high-quality solutions at the beginning, it is difficult for GA to locate the optimum in the high-dimensional solution space.

Figure 10 shows the effect of $K$ on execution time. To observe the average execution time of GRA, we set $N$ to 1,000, fix the remaining parameters, and increase $K$ up to 50. Since GA cannot compete with GRA in solution quality for such a large problem size, we do not examine its behavior here. As shown in the figure, when $K$ increases, the run time of GRA increases only slightly, roughly in proportion to the number of channels. In fact, the number of channels in a real mobile environment is far lower than 50, which implies that GRA is able to schedule 1,000 data items for any real-world broadcast server.

Figure 11 shows the effect of $\theta$ on convergence speed. In this experiment, we fix the other parameters and vary $\theta$ from 0 to 1.0. When $\theta = 0$, all the data items are of the same popularity; however, the sizes of the data items differ, and the bandwidths of the channels are also distinct. These variations make it difficult for GRA to converge. On the other hand, as $\theta$ approaches 1.0, there are only a few popular data items, which should be allocated to channel 1 (i.e., the one with the highest bandwidth). The other items, with very low access probabilities, can be roughly allocated to the remaining channels. Therefore, GRA converges very rapidly when $\theta = 1.0$.

In Figure 12, we implement an optimization algorithm, namely a branch-and-bound algorithm (B&B), and compare it with GA and GRA. Since B&B is very time-consuming, we only observe the results for small problem sizes. Both B&B and GRA can provide optimal solutions in this range. However, the average run time of B&B reaches 30 seconds on the largest of these instances, whereas the run time of GRA is always less than 4 seconds, even in the worst case. It is clear that such time complexity will exclude optimization algorithms like B&B from practical use.

In sum, GRA is a practical algorithm, even for large problem instances. Compared with the other two algorithms, GRA is more suitable for application in the real world. Moreover, each solution is generated in linear time, with a guaranteed error bound.

7. Conclusion

Minimization problems are usually time-consuming, especially for large problem instances. Consequently, most traditional studies have employed metaheuristic algorithms to solve such problems. However, such algorithms have several shortcomings. First, their solutions are obtained by trial and error, so solution quality is not guaranteed. Second, their approximation algorithms cannot converge linearly. Third, some traditional methods need to keep track of partial results, so they are memory-consuming. Fourth, some traditional methods, such as dynamic programming and branch-and-bound algorithms, are not scalable. Once the problem size increases, it may take several days to generate an optimal solution, and such a delay is impractical.

Mapping a discretized problem from $\mathbb{Z}$ to $\mathbb{R}$ is an interesting idea. In this study, a gradient-based algorithm is proposed to deal with WTM. We first map it from $\mathbb{Z}$ to $\mathbb{R}$; the mapped problem is then solved optimally in $\mathbb{R}$; finally, the optimal solution is mapped from $\mathbb{R}$ back to $\mathbb{Z}$. Moreover, the theoretical basis ensures that the proposed algorithm converges linearly, provides high-quality solutions, and requires little memory.

In the near future, we will extend the concept to other optimization problems. By mapping a problem from its original domain to another domain, we are likely to find a more time-efficient and cost-effective way to achieve similar results.

Parameters

$N$: Number of data items
$S(N)$: Summation of all item sizes $s_i$
$K$: Number of channels
$\pi$: Access pattern (sequence of $d_i$)
$\sigma$: Access pattern for WTM
$\sigma'$: Access pattern for WTM′
$Q$: Cumulative probability function (Definition 2)
$S$: Cumulative data-size function (Definition 3)
$\mathbf{x}$: Position vector in $\mathbb{Z}$
$\mathbf{y}$: Position vector in $\mathbb{R}$
$f$: Objective function of WTM
$f'$: Objective function of WTM′
$\bar{Q}$: Interpolating function passing through all points $(i, Q(i))$
$\bar{S}$: Interpolating function passing through all points $(i, S(i))$
$\mathbf{y}^{(0)}$: Balanced initial position vector
$\mathbf{x}^*$: Optimal solution to $f$ (WTM)
$\mathbf{y}^*$: Optimal solution to $f'$ (WTM′)
$\hat{\mathbf{x}}$: Near-optimal solution to WTM.

Competing Interests

The author declares no competing interests regarding the publication of this paper.