ISRN Electronics

Volume 2012 (2012), Article ID 859820, 10 pages

http://dx.doi.org/10.5402/2012/859820

## Polynomial Time Instances for the IKHO Problem

^{1}Department of Computer Science, University of Verona, 37134 Verona, Italy; ^{2}Department of Information Engineering and Computer Science, University of Trento, 38123 Povo, Italy

Received 20 January 2012; Accepted 7 February 2012

Academic Editors: C. W. Chiou and T. L. Kunii

Copyright © 2012 Romeo Rizzi and Luca Nardin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The Interactive Knapsacks Heuristic Optimization (IKHO) problem is a particular knapsacks model in which, given an array of knapsacks, every insertion in a knapsack also affects the other knapsacks, in terms of weight and profit. The IKHO model was introduced by Isto Aho to model instances of the load clipping problem. The IKHO problem is known to be APX-hard and, motivated by this negative fact, Aho exhibited a few classes of polynomial instances for the IKHO problem. These instances were obtained by limiting the ranges of two structural parameters, *c* and *u*, which describe the extent to which an insertion in a knapsack influences the nearby knapsacks. We identify a new and broad class of instances allowing for a polynomial time algorithm. More precisely, we show that the restriction of IKHO to instances where the ratio between the width of the whole influenced zone and the width of the cloning block is bounded by a constant can be solved in polynomial time, using dynamic programming.

#### 1. Introduction

The Interactive Knapsacks Heuristic Optimization problem (IKHO) is a particular knapsacks model in which, given an array of knapsacks, an insertion in a knapsack influences the nearest knapsacks, in terms both of weight and of profit. It was introduced by Aho in [1] for solving the load clipping problem arising in electricity management applications. It belongs to the general framework of the Interactive Knapsacks problems (IK) (also defined in [1]), which has several other applications, for example, in electricity management, single and multiprocessor scheduling, and packing of items into different knapsacks. Since IKHO is NP-complete [1] and APX-hard [2], the search for polynomial time instances is very important. In [3], Aho introduces a few classes of such instances by restricting the values of certain parameters of the problem: *c* and *u*, which determine the extent of the influence on other knapsacks caused by an insertion, and a third parameter that limits the number of insertions. We continue this line of investigation by adding a wide and significant class of polynomial time instances for the IKHO problem, namely the case when the ratio between the width of the influenced zone and the width of the cloning block is bounded.

Intuitively, in IKHO, when we insert an item in a knapsack, this item is replicated (*cloned*) to the next *c* − 1 knapsacks (hence forming a *cloning block* over *c* consecutive knapsacks), and it causes an arbitrary but predetermined modification (*radiation*) of the weight and profit of the knapsacks at distance at most *u* from the cloning block (on both sides of the cloning block). After a knapsack is involved in a cloning operation, we are not allowed to insert any other item in that knapsack. Therefore, the cloning blocks are disjoint. In this paper, we are mainly interested in the case where the ratio between the whole width of the influenced zone (cloning plus radiation zones) and the width *c* of the cloning part is bounded by a constant. We propose a dynamic programming algorithm based on a three-dimensional matrix, whose time complexity is polynomial in the number of knapsacks and in the maximum number of cloning blocks that we can insert in the knapsacks array.
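To make this geometry concrete, here is a minimal Python sketch of the zones touched by a single insertion, under our reading of the model; the function names and the 0-based indexing are ours, not the paper's notation:

```python
def influence_zones(i, c, u):
    """Return the index ranges touched by an insertion at knapsack i.

    Assumption (our reading of the model): the cloning block covers
    knapsacks i .. i+c-1, and the radiation part covers up to u
    knapsacks on each side of the cloning block.
    """
    cloning = range(i, i + c)  # c consecutive knapsacks
    radiation = list(range(i - u, i)) + list(range(i + c, i + c + u))
    return cloning, radiation


def influence_ratio(c, u):
    """The ratio this paper assumes bounded: whole influenced width
    (cloning plus both radiation zones) over the cloning width."""
    return (c + 2 * u) / c
```

For example, with c = 2 and u = 1, an insertion at knapsack 5 clones over knapsacks 5 and 6, radiates over knapsacks 4 and 7, and the ratio (c + 2u)/c equals 2.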

In Section 2, we give the original formulation of the problem from [1] and we then simplify it to ease our exposition in later sections. In Section 3, we give the algorithm. In Section 4, we sharpen the complexity result. Finally, in Section 5, we design a memory saving version of that algorithm.

We conclude this section by defining some useful notation. Henceforth, we write to denote a zero constant vector of length , that is, for . Moreover, if , we write to indicate the concatenation of the two binary strings. Furthermore, always denotes a range of integers and, if , we assume that is empty. In the same way, if , the notation * for * means * for no *.

#### 2. Formulation of the IKHO Problem

We are given an array of knapsacks, each one with a given capacity. There is a single item that we are asked to insert at most a given number of times in the knapsacks array, this bound being a natural given as part of the input. The profit and weight of an insertion depend on the knapsack in which we insert: for each knapsack, two given naturals represent the weight and the profit of an insertion in that knapsack. The main feature of IK problems is that every insertion also influences the weight and profit of the nearby knapsacks. In this way, the weight charged on and the profit relative to a knapsack are established by the insertions in all the knapsacks. To describe this mechanism, Aho introduces a function (called interactive function) for each knapsack, which determines the interaction from that knapsack to every other knapsack. In particular, given naturals *c* and *u*, for each knapsack, we know that
The first range is called the *cloning block* and the second the *radiation part*. The role of these functions becomes clear in formulas (2)–(5).

The decision variables, one per knapsack, denote in which knapsacks we perform the insertions. Given the input described above, an IKHO problem is to
where in (5). Clearly, since a feasible solution must be a binary vector, at most one item may be put in each single knapsack. Moreover, notice that in Constraint (3), which imposes that the knapsacks are not overfilled, each decision variable is multiplied by the corresponding weight. Thus, when we insert in a knapsack, the same weight is equivalently charged on all the knapsacks of its cloning block. This is why that range is called the *cloning block*. Regarding the knapsacks in the radiation part, an arbitrary portion of the weight is added to or subtracted from them (since the interaction values can be negative). Similar operations are performed in the maximization function (2) with the profits. Furthermore, Constraint (4) specifies the maximum number of cloning blocks to be put into the knapsacks array, while Constraint (5) states that the cloning blocks must be disjoint. The IKHO model is more widely explained and motivated in [1, 4].

##### 2.1. Simplifying the Formulation

Our first step here is to simplify the formulation of the problem by making the notion of weight independent from the interaction functions, and the profit dependent only on the knapsack where we insert. This is accomplished by exploiting the transformation proposed by Aho in [2] in order to reduce IKHO to MDKP, an ILP formulation surveyed in [5]. We define two quantities so that the first represents the total profit of an insertion in a knapsack, and the second is the weight that an insertion in one knapsack charges over another knapsack. From the features of the interaction functions stated in (1), it follows that both quantities are rational numbers; notice that they can also be negative.

Now, we can reformulate the problem as follows: Henceforth, we always refer to this latter formulation of the problem, since it simplifies the description of the algorithm.
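As a sanity check of the simplified formulation, a brute-force reference solver can enumerate all binary insertion vectors. The following sketch is our own illustration (the names `p[i]` for total profit, `w[i][j]` for the weight an insertion at `i` charges on knapsack `j`, `b[j]` for capacities, `c` for the cloning width, and `k` for the insertion bound are assumptions); it is exponential in the number of knapsacks and only useful on tiny instances:

```python
from itertools import product


def solve_ikho_brute(p, w, b, c, k):
    """Brute-force reference for the simplified IKHO formulation.

    p[i]    : total profit of an insertion at knapsack i
    w[i][j] : weight charged by an insertion at i onto knapsack j
    b[j]    : capacity of knapsack j
    c       : cloning-block width (insertions must be >= c apart)
    k       : maximum number of insertions
    """
    m = len(b)
    best, best_x = None, None
    for x in product((0, 1), repeat=m):
        ones = [i for i, v in enumerate(x) if v]
        if len(ones) > k:
            continue
        # disjoint cloning blocks: any two insertions at distance >= c
        if any(j - i < c for i, j in zip(ones, ones[1:])):
            continue
        # capacity constraint on every knapsack
        if any(sum(w[i][j] for i in ones) > b[j] for j in range(m)):
            continue
        profit = sum(p[i] for i in ones)
        if best is None or profit > best:
            best, best_x = profit, x
    return best, best_x
```

On a toy instance with four knapsacks of capacity 1, where an insertion at `i` charges unit weight on knapsacks `i` and `i + 1`, the solver picks the two insertions that the disjointness constraint allows with the largest total profit.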

Let us restate the behavior of the parameter as inherited from the interaction functions. For , we have that

##### 2.2. Polynomial Time Instances

The classes of instances isolated by Aho are the following: (a) the instances with ; (b) those with ; (c) those with ; (d) those with , for a constant .

The restriction of IKHO obtained by considering only the instances in (a) corresponds to the situation in which there are no interactions, whence the decision on whether to insert an item can be taken independently for each knapsack. As for (b), notice that any instance of IKHO admits at most feasible solutions, which is only a polynomial number of possibilities under the restriction in (b). We refer to [3] for details on Aho's algorithm for instances of type (c) and (d).

In Section 3, we describe an algorithm for IKHO that has time complexity of . When is bounded by a constant, it clearly becomes a polynomial time algorithm. Indeed, note that the term is polynomial also when , for a constant . In fact, , and is a decreasing function of . Therefore, our results imply those reported in (a), (c), and (d).

#### 3. The Algorithm

In the following, always denotes the input IKHO instance. Let . A binary string is called a *signature* if it obeys Constraint (5), that is, if no two of its 1-entries lie at distance smaller than *c*. We denote by the set of all signatures.
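Under this reading of Constraint (5), the signature set can be generated by brute force; a small illustrative sketch (the names are ours):

```python
from itertools import product


def signatures(n, c):
    """All binary strings of length n obeying Constraint (5), read as:
    any two 1-entries lie at distance at least c."""
    sigs = []
    for s in product((0, 1), repeat=n):
        ones = [i for i, v in enumerate(s) if v]
        if all(j - i >= c for i, j in zip(ones, ones[1:])):
            sigs.append(s)
    return sigs
```

For n = 3 and c = 2, this yields the five signatures 000, 001, 010, 100, and 101.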

Given a solution , denotes the number of insertions prescribed by . Moreover, for each , is the weight charged on the knapsack by the solution . Then, we say that a solution obeys the *capacity constraint* (Constraint (8)) on the knapsacks if and only if for each . Furthermore, we write when for each and for each , that is, when *starts with* the signature in the knapsacks , whence having the form .

##### 3.1. The Subproblems of Our DP Approach

Given a natural , a natural , and a signature , we consider a modified problem , whose solutions are those which obey The objective function is the same as in the IKHO formulation. The differences between the IKHO problem and the above-defined subproblems lie in the additional parameters and their use in the constraints. (i) Constraint (10) fixes the first insertions in compliance with the signature . (ii) The range on which we check the capacity constraint in (11) is , a subset of the range checked in IKHO. (iii) By Constraint (12), we can do at most insertions. Notice that in general (12) is more restrictive than (4).

In the following, we denote by the space of solutions to the IKHO instance , and by the space of solutions to the modified problem . Moreover, let be the maximum value of a solution in and the maximum value of a solution in . It is assumed that when is empty.

##### 3.2. The Dynamic Programming Algorithm

Our dynamic programming approach is based on Lemmas 1 and 2, whose proofs are given later in this subsection. In particular, Lemma 1 shows how to read out an optimal solution to the IKHO instance, from the optimal solutions to the subproblems.

Lemma 1. *Let be the set of the signatures such that obeys the capacity constraint on all knapsacks . Then, the optimal value of the IKHO instance is the maximum, over the signatures in this set, of the optimal values of the corresponding subproblems; indeed, an optimal solution can be read out accordingly.*

Lemma 2 explains how to recursively solve the subproblems. We need some additional notation. Given , , for , we write to denote the binary string obtained from by setting its -th element to . Moreover, if , we let . Furthermore, for each , and , a signature is called *-good* if obeys the capacity constraint for the knapsack and if .

Lemma 2. *For , , , the optimal value of the subproblem equals the maximum, over the -good signatures, of the optimal values of the corresponding smaller subproblems; indeed, an optimal solution can be read out accordingly.*

The base for the recursion, that is, the cases where and , is handled in Section 3.3.

In order to prove Lemmas 1 and 2, let us begin by pointing out some basic facts that directly derive from the IKHO formulation. Observations 1 and 2 play an important role in the formal proofs of these lemmas. For this reason, these observations and their proofs are visualized in Figures 1 and 2, respectively.

*Observation 1. *Assume . Let such that for . Then, for each , satisfies the capacity constraint if and only if satisfies it. Indeed, for each , .

*Proof. *Let . Remember that . However, by (9), for , that is, when . Moreover, for , since . Therefore,

Observation 2 covers the left/right-reverse situation.

*Observation 2. *Assume . Let such that for each . Then, for each , satisfies the capacity constraint if and only if satisfies it. Indeed, for each , .

*Proof. *Let . By (9), for , that is, when . Moreover, for , since . Therefore,

Now, we are ready to prove Lemmas 1 and 2.

*Proof of Lemma 1. *First, let us show that every feasible solution to IKHO is a feasible solution to one of the subproblems for an . Clearly, for each , taking , we get that . By exploiting Observation 1 with , we get that obeys the capacity constraint on knapsacks , and then . To prove the opposite inclusion, take and . We show that . Constraint (4), Constraint (5), and the capacity constraint on knapsacks are clearly satisfied. Since , Observation 1 lets us verify the capacity constraint on knapsacks .

Lemma 2 directly follows from the two opposite inclusions, that we show separately. While reading these proofs, Figure 3 can be useful to visualize the structure of the vectors involved in the proofs.

*Proof of Lemma 2. *First, we show that
*Proof. *Suppose . Let . The inclusion follows from two facts: (a) is -good; (b) . Take . Since obeys the capacity constraint on knapsacks and for , by exploiting Observation 1 with , we get that satisfies the capacity constraint on the knapsack . Clearly, satisfies Constraint (5), being a substring of . Hence, is -good.

In order to show that , we take and we show that . From the definition of the subproblems, it is simple to verify that obeys Constraints (10), (12), and (5) of . Moreover, since satisfies the capacity constraint for each , by applying Observation 2 with , we get that satisfies the capacity constraint over the knapsacks , and then obeys Constraint (11) too.

Second, we prove that

*Proof. *Take a such that is -good and a . We will show that . Constraints (10) and (12) of easily follow from the definition of the subproblems. Moreover, since satisfies Constraint (5) and , then satisfies Constraint (5).

It remains to verify Constraint (11). Since is -good, then obeys the capacity constraint on the knapsack . By applying Observation 1 with , we derive that also obeys the capacity constraint on that knapsack. Moreover, since , then obeys the capacity constraint on the knapsacks . Since for , by Observation 2, obeys the capacity constraint also for the knapsacks .

##### 3.3. The Base of the Recursion

We have two base cases. Observation 3 handles the case when , while Observation 4 treats the case when .

*Observation 3. *Consider , for all , and . Moreover, let . If and for each , then . Otherwise, .

*Proof. *Clearly, since and by Constraint (10), there cannot exist a solution to different from . Moreover, notice that .

*Observation 4. *Consider , for all , and . If , then . Otherwise, .

*Proof. *Clearly, by Constraint (12), can be the only solution to .

#### 4. Complexity

In this section, we prove Lemma 3.

Lemma 3. *Let be a constant such that is bounded by when , and is bounded by when . The above algorithm takes time and space of .*

Clearly, our algorithm exploits a three-dimensional matrix for storing the values , for , , and . We also need a matrix of the same size which traces, for each subproblem , the subsequent subproblem used to compute . This allows us to rebuild the optimum solution at the end. The space complexity of the algorithm is then . We need to estimate the value of , but first let us compute the time complexity.

In order to evaluate the base case of our dynamic programming algorithm, we first refer to Observation 3. Clearly, for , , because there must be at least insertions in a solution that starts with the signature . Moreover, since is the only feasible solution to , it is clear that, for each , , by Constraint (12). Therefore, we have to compute only for , and then the number of base case subproblems to be computed is only . Since , is the time needed for computing both the profit of and , for an . Then, to check the capacity constraint on all the knapsacks , we need computations. Thus, the base case can be computed in time of . About the base case , as handled by Observation 4, note that if we encode the signatures (such an encoding is given in Section 4.2), we can check the condition in . Regarding the general case, by (14), for solving a subproblem, we have to check whether is -good, for . To check whether satisfies the capacity constraint on the knapsack , and whether , we spend computations. Since the number of subproblems is , we need time to fill the matrix . Moreover, by (13), we have to scan over the in order to find the maximum value of . Clearly, , but we need computations to check the capacity constraints on all knapsacks , because each signature has width . Thus, we spend computations to find . Furthermore, rebuilding the best solution takes time. Therefore, the part in which we recursively compute the subproblems dominates the complexity of the entire algorithm. Both this time complexity and the space complexity depend on the value of . In Section 4.1, we give an estimate of this value. Moreover, in Section 4.2, we show an ordering of the set that permits us to check the capacity constraint on a knapsack in constant time.

##### 4.1. Estimating

In the case where is a constant, we can directly estimate .

*Observation 5. *If is a constant, then .

*Proof. *Clearly, , because the number of binary strings is . Moreover, when , we supposed constant. When , since , we get that , with constant and .

For nonconstant values of , let us find a general form for . Let denote the number of binary strings such that obeys Constraint (5). Note that Constraint (5) contains the parameter *c*. When , we have places where we can insert, and at most one insertion is possible by Constraint (5). Moreover, we have to count the string with no insertions. Thus, we get that for all . For greater values of , we refer to the recursion shown in Figure 4. If the first bit of is 0, the choice of the following bits is not influenced. Therefore, it is enough to find the number of strings such that obeys Constraint (5), which is exactly . If the first bit is 1, by Constraint (5), the following *c* − 1 bits are necessarily 0's. In this case, we continue to choose after the *c*-th bit. Thus, we have ways to choose the remaining bits. Therefore, we can express by the recurrence equation:
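In our notation, the recursion just derived reads F(n) = F(n − 1) + F(n − c) for n > c, with F(n) = n + 1 for n ≤ c. It can be evaluated in linear time and checked against direct enumeration; the name `F` is ours:

```python
def F(n, c):
    """Count of binary strings of length n obeying Constraint (5):
    F(i) = i + 1 for i <= c, and F(i) = F(i-1) + F(i-c) otherwise."""
    vals = []
    for i in range(n + 1):
        vals.append(i + 1 if i <= c else vals[i - 1] + vals[i - c])
    return vals[n]
```

For c = 2 the sequence 1, 2, 3, 5, 8, 13, … exhibits the familiar Fibonacci-like growth.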

Lemma 4 gives a general estimate of the recurrence , in order to bound for nonconstant values of .

Lemma 4. *Let . For each , .*

*Proof. *We prove the claim by induction on , and we postpone to the appendix the proof of the base of the induction, that is, the case . For , we prove the induction step. Clearly,
Hence, it is sufficient to show that or equivalently . Since , it remains to show that .

We know that for each real . By substituting with and noticing that , we get that .

Since , we can apply Lemma 4 to deduce that , as for . Therefore, when is bounded by a constant , we get .

##### 4.2. Ranking the Set

We use Recurrence (20) to define a function , which provides a unique index for each signature, and hence, it gives a ranking for the set .

*Definition 5. *For each ,

Note that is the number of signatures having length , and it is equivalent to the number of signatures of length that start with a 0. Hence, as illustrated in Figure 5, in the step of the sum where , we intend to place the signatures with after all the signatures with , which are exactly . For , we recursively do the same, locating substrings of length .

Conversely, given an integer , the unranking procedure is the following. Take . For , if , set and .

Evidently, in order to efficiently perform such an ordering of the set , we need to compute and store the recurrence , for , at the beginning of the algorithm. This takes time and space, whereas the ranking and unranking operations take time.
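The ranking and unranking just described can be sketched as follows. This is a hypothetical implementation under our reading: signatures are tuples of 0/1 of length `n`, ordered so that strings starting with 0 precede those starting with 1, and `Fv` is the precomputed table of the recurrence of Section 4.1:

```python
def F_table(n, c):
    """Fv[i] = number of signatures of length i (recurrence of Section 4.1)."""
    Fv = []
    for i in range(n + 1):
        Fv.append(i + 1 if i <= c else Fv[i - 1] + Fv[i - c])
    return Fv


def rank(s, c, Fv):
    """Index of signature s in the ordering described above."""
    r, i, n = 0, 0, len(s)
    while i < n:
        if s[i] == 1:
            r += Fv[n - i - 1]  # skip every signature with a 0 here
            i += c              # the next c - 1 bits are forced to 0
        else:
            i += 1
    return r


def unrank(r, n, c, Fv):
    """Inverse of rank: the signature of length n having index r."""
    s, i = [0] * n, 0
    while i < n:
        if r >= Fv[n - i - 1]:
            s[i] = 1
            r -= Fv[n - i - 1]
            i += c
        else:
            i += 1
    return tuple(s)
```

Both operations touch each position at most once, hence run in time linear in the signature length once the table is available.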

Indeed, we can avoid encoding and decoding the signatures for the computation of each subproblem. This can be done by initializing a table at the beginning of the algorithm, which stores, for each position relative to a signature , a list of the bits that change from the previous signature , that is, the signature having . It is easy to verify that the following procedure finds the next signature from the previous one (it works similarly to the function that increments a binary counter, but considering Constraint (5)). (i) Scan the previous string starting from the least significant bit (rightmost) and find the first range of *c* consecutive 0's, or a range of consecutive 0's that includes the most significant bit (leftmost). (ii) If such a range exists, the next string is obtained by setting to 1 the rightmost bit of the range, and by setting to 0 all the bits to the right of the range. (iii) If such a range does not exist, is the last signature.
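The incrementing procedure above can be sketched directly on tuple-encoded signatures (an illustration in our own notation; step (i) looks for the rightmost window of c zeros, step (ii) performs the flip and reset):

```python
def next_signature(s, c):
    """Successor of signature s in increasing order, or None if s is last."""
    s, n = list(s), len(s)
    # (i) find the first range of c consecutive 0's, scanning from the right
    for j in range(n - 1, c - 2, -1):
        if all(s[t] == 0 for t in range(j - c + 1, j + 1)):
            s[j] = 1                     # (ii) set the rightmost bit of the range
            for t in range(j + 1, n):
                s[t] = 0                 # ... and clear everything to its right
            return tuple(s)
    # (i') otherwise, look for a range of 0's including the most significant bit
    if s[0] == 0:
        t = s.index(1) if 1 in s else n  # the run of 0's is s[0 .. t-1]
        s[t - 1] = 1
        for q in range(t, n):
            s[q] = 0
        return tuple(s)
    return None                          # (iii) s was the last signature
```

Iterating from the all-zero signature enumerates the whole set in increasing order, which is what allows the algorithm to update the charged weights incrementally.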

First of all, observe that, by Constraint (5), there are at most insertions in a signature , and we know that . Therefore, for every kind of ranking of the set , the number of bits changing between two adjacent signatures is . Thus, if we know the changing bits from a signature to the next , we can use an incremental approach for computing the value from in constant time. This allows us to check in constant time the capacity constraints involved in the -goodness (14), those regarding the definition of the set (13), and those arising when computing the base case (Observation 3). Note that Constraint (5) of the -goodness can also be checked in with the same technique.

Moreover, note that when computing the matrix , if we place the cycle on the variable outside the cycle on the variable , we can clearly find the next signature and check the capacity constraints only once every subproblems. In this way, the cost of finding the next signature becomes negligible.

Thus, we can conclude that our algorithm has time and space complexity of .

#### 5. Memory Saving Version

Consider (14). Given an , in order to compute for each and , we need only the elements having of the matrix . Moreover, by (13), only the elements with are required for computing . Thus, in order to work out the profit of a best solution, we only need space. Unfortunately, this simplification does not apply to the matrix used to rebuild the best solution.

In [6], Hirschberg presented an elegant and practical space reduction method for the longest common subsequence problem, which works for many dynamic programming algorithms (it is also well presented in [7]). In general, this method allows one to compute an optimal solution taking as much space and time as if we had only to compute the optimal solution value. This is accomplished by exploiting the equation which handles the recursion in the original algorithm (in our case (14)). Its space policy exploits the space improvement mentioned at the beginning of this section.
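For reference, here is the classical halving scheme on the problem for which Hirschberg introduced it, the longest common subsequence [6]. This compact Python illustration sketches the general method, not our IKHO algorithm:

```python
def lcs_row(a, b):
    """Last row of the LCS length table for a vs b, in O(len(b)) space."""
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if x == y else max(cur[-1], prev[j]))
        prev = cur
    return prev


def hirschberg(a, b):
    """An actual LCS of a and b, computed in linear space."""
    if not a or not b:
        return ""
    if len(a) == 1:
        return a if a in b else ""
    mid = len(a) // 2
    # scores of the left half against every prefix of b,
    # and of the reversed right half against every suffix of b
    left = lcs_row(a[:mid], b)
    right = lcs_row(a[mid:][::-1], b[::-1])
    # best split point of b for this halving of a
    k = max(range(len(b) + 1), key=lambda j: left[j] + right[len(b) - j])
    return hirschberg(a[:mid], b[:k]) + hirschberg(a[mid:], b[k:])
```

Only the last row of the dynamic programming table is kept (`lcs_row`), and the split point maximizing the sum of the two half-scores lies on an optimal path, so the recursion reconstructs an actual optimal subsequence while using linear space overall.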

Conceptually, the basic idea of the method is to halve a dimension of the dynamic programming matrix and find how the best solution is divided along the other dimensions. This permits the two halves obtained to be solved separately and recursively in the same way. In order to apply this method to our algorithm, we follow the next steps. (i) We halve the knapsack array. (ii) We find how many insertions of the best solution are placed in each half of the knapsack array. (iii) We locate a number of insertions placed around the middle of the knapsack array. This allows us to break up the IKHO problem into two independent subproblems, which are then solved recursively.

Notice that, in the last sentence, the word *subproblems* does not refer to the subproblems defined in Section 3.

In Section 5.1, we implement this idea. In Section 5.2, we show that the new defined algorithm decreases the space complexity to , without increasing the time complexity.

##### 5.1. The Algorithm

In the following, we write when for each and for each , that is, when *ends with* the signature in the knapsacks .

Given a natural , a natural , and a signature , we consider a modified problem whose solutions are those which obey The subproblems are simply the symmetrical transposition of the subproblems . The only significant difference is that we check the capacity constraints in the range , which is not symmetrical to the range . This dissimilarity is caused by the fact that the radiations produced by an insertion are themselves not symmetrical (see (9)).

We denote by the space of the feasible solutions to the modified problem . Moreover, is the maximum profit of a solution in when is not empty, and when is empty. Notice that, since the subproblems and are symmetrical, the properties proved for hold symmetrically for , with some adjustments due to the fact that the radiations are not exactly symmetrical. Thus, computing the matrix takes the same space and time needed for computing the matrix , that is, space and time.

In the following, let be the number of insertions in a signature . For each , and for , let . Clearly, this function gives, for each knapsack, the profit caused by placing the signature in the knapsacks . Moreover, for , and , let be the substring of composed of the elements in the range (we assume is empty when ). Furthermore, for , for each signature , for and such that , let . This new operator defines a new space of solutions given by the concatenation of the feasible solutions of two symmetrical subproblems. Notice that the signature represents the joining point when concatenating the two strings. This situation is represented in Figure 6, which is also useful to visualize the proof of Lemma 6, which represents the main innovation of our algorithm.

Lemma 6. *Let , and . Then, the optimal value of the IKHO instance equals the maximum, over the choices of the middle signature and of the distribution of the insertions between the two halves, of the combined values of the corresponding left and right subproblems; indeed, an optimal solution can be read out accordingly.*

*Proof. *In order to show that , take . Moreover, take , , , and . In the following, we prove that and . Obviously, and . Concerning the number of insertions, it is clear that satisfies Constraint (12) of , as . Moreover, note that , because , and . Thus, , and then satisfies Constraint (12) of . About the capacity constraints, in order to show that satisfies them on the knapsacks , we can apply Observation 2 with . Conversely, applying Observation 1 with , we obtain that satisfies the capacity constraint on the knapsacks .

To prove the converse inclusion, take , such that , , and . We prove that . Clearly, since and , we get that , and thus satisfies Constraint (12). Moreover, since , it is simple to verify that also satisfies Constraint (5). Furthermore, exploiting Observation 2 with , and Observation 1 with , we get that satisfies the capacity constraint on all the knapsacks .

Let and be the values of and that maximize (24). Moreover, let be a best solution for an IKHO instance . Clearly, the signature represents a piece of , that is, . In addition, note that determines a distribution of the best solution insertions, that is, and . If we do not consider the insertions given by the signature , we obtain and , as described in Figure 7.

Fixing the middle elements of has an important consequence. The radiations of weight starting from the insertions at the left of cannot interfere with the radiations coming from the insertions placed at the right of , as shown in Figure 8. In particular, by (9), the right insertions affect only the range , while the left insertions affect only the knapsacks .

Therefore, in order to compute the entire part of which lies to the right of , it is enough to know , because the checks on the capacity constraints are independent from the insertions at the left of . The same clearly holds for finding the left piece of . Thus, we have subdivided the main problem into two independent subproblems, as follows: (i) to find the best solution in the knapsacks , obeying the capacity constraints over the knapsacks , having at most insertions and knowing that ; (ii) to find the best solution in the knapsacks , obeying the capacity constraints over the knapsacks , having at most insertions and knowing that .

The above-obtained subproblems are solvable as an IKHO instance, by recursively applying (24) with some adjustments. The only significant difference between the main call and the recursive calls is that, in the latter, when checking the capacity constraints, we have to consider the insertions given by the previously fixed . Indeed, we can simplify this task by progressively updating the vector of capacities , by subtracting for each . At each step of the recursion, the new vector is passed as input to the lower level subproblems. The solution of each subproblem fixes over a signature , so as to recursively cover with substrings of length . This is visualized in Figure 9. Finally, note that the base cases for this recursion are the subproblems which involve a number of knapsacks lower than . In order to compute the best solution for them, we simply list all the feasible solutions and compare their profits.

##### 5.2. Complexity

As mentioned at the beginning of this section, in the first call (the root node of the recursion tree), computing (24) takes space. The recursive calls occupy geometrically less memory than the first call, as they deal with a halved number of knapsacks (and with a lower value of ), whence the total memory consumption is of the same order as the memory consumption of the first call alone. Note that, at each step of the recursion, once we have found and , we can deallocate the matrices and . Besides these matrices, we need space to dynamically compose the best solution . Therefore, the new algorithm takes only space.

Let us now analyze its time complexity. In the first call, for the computation of (24), the algorithm spends time to compute the matrices and , and time to pick out the values and . It also spends time in order to update the vector by subtracting the radiations of weight given by . In fact, a signature has width , and the range of influence of an insertion is . To give an estimate of the cost of the subproblems computation, we need to study how the two terms found propagate to the next levels of calls. Let us write them as and , for some constants .

First, note that the recursive calling scheme for (24) can be approximated with a binary tree of height . Indeed, since the calls on an input of knapsacks are treated as leaf cases, the height of the binary tree corresponds to the first integer such that , that is, . Moreover, it is simple to verify that such a binary tree has nodes. In fact, since at each level we have at most nodes, the total number of nodes follows by summing over the levels. Therefore, considering all the calls of (24), for the first term, we obtain .

Note that, when we call the two children of a node, both the knapsack array dimension and the dimension are subdivided. In particular, we call the children resolutions on two halves of the knapsack array ( and ), taking parameters and such that . Thus, for each level and each node , the second term of the complexity is , with . Therefore, for the second term, the complexity of each entire level is clearly . Moreover, by adding up the levels, we get that the total complexity for the second term is Since , the second term always dominates the first . Therefore, the complexity of the nodes computation is .

Regarding the computation of the leaf cases, note that we have to find the best feasible piece of solution for an input of at most knapsacks. Since these pieces have length lower than , and they must obey Constraint (5), they are at most . Moreover, we have to consider the cost of checking the capacity constraints on knapsacks. Thus, a leaf case can be solved in time. Since we have leaf cases, the total cost of the leaf cases is , which is not higher than the complexity needed for computing the internal nodes. Therefore, the memory-saving version of the algorithm has time complexity of , which is the same as for the base version.

#### 6. Conclusions

In [3], Aho exhibited a few classes of polynomial instances for the IKHO problem, motivated by the fact that IKHO is NP-complete [1] and APX-hard [2]. The most important class of instances identified by Aho is represented by the instances where , for a constant . Throughout Sections 3 and 4, we identified a new and wide class of instances allowing for a polynomial time algorithm. We achieved this by showing how to build a dynamic programming algorithm which executes in time of and takes space of for the instances where is bounded by a constant . These results represent a significant improvement in the understanding of the IKHO problem and, more generally, of the Interactive Knapsacks problems to which IKHO belongs and which are extensively presented by Aho in [4]. Note that our results imply Aho's results for IKHO, as shown in Section 2.2. In Section 5, we also exploited Hirschberg's approach in order to create a memory saving version of our algorithm, which decreases the space complexity to without increasing the time complexity.

Extensive experimental evaluations have been performed on the C++ implementations of both algorithms, confirming the complexity estimates given in Sections 4 and 5. In addition, these experiments pointed out that, despite the heavy constants introduced in the time complexity bound of the memory saving version, the latter is often faster than the base version in practice. This is mainly due to the fact that the higher memory usage of the base version leads the operating system to allocate data in slower memory devices, like the RAM and the hard disk, instead of using the CPU cache.

#### Appendix

#### Proving the Base of Lemma 4

In this appendix, we prove the base of the induction of Lemma 4. We have to show that, assuming , for .

In the following, let . First, notice that for . Indeed, for , we can compute by simply counting the number of signatures of length , and then verify that this number is exactly . By Constraint (5), in a signature of length , or smaller, there can be at most two insertions. Thus, since , we can count the signatures of length by grouping them according to the number of insertions (none, one, or two). Obviously, only the all-zero signature has no insertions. Moreover, there are exactly ways to place one insertion. For the case where we have two insertions, note that the first one can be only in the first positions, because if we place it in the -th position, by Constraint (5), there cannot be later insertions in a signature of length . Moreover, notice that if we fix the first insertion in position , then we have possible places where to put the second one, again by Constraint (5). Therefore, the number of signatures with two insertions is clearly

So for , as anticipated. Therefore, in order to prove the base of the induction of Lemma 4, it is enough to show that , with .
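The counting argument above can be verified mechanically against brute-force enumeration for small values; in our notation, for c ≤ n ≤ 2c the count is 1 (no insertion) plus n (one insertion) plus (n − c)(n − c + 1)/2 (two insertions):

```python
from itertools import product


def count_signatures(n, c):
    """Brute-force count of binary strings of length n obeying Constraint (5)."""
    count = 0
    for s in product((0, 1), repeat=n):
        ones = [i for i, v in enumerate(s) if v]
        if all(j - i >= c for i, j in zip(ones, ones[1:])):
            count += 1
    return count


def closed_form(n, c):
    """Valid for c <= n <= 2c: one string with no insertion, n with one,
    and (n - c)(n - c + 1) / 2 with two insertions."""
    return 1 + n + (n - c) * (n - c + 1) // 2
```

For instance, with c = 3 and n = 6 both sides give 13.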

Clearly, for , we get . Our plan is then to observe that holds for every , when . Indeed, , whereas . When , we get , and since , it follows that . Moreover, it is simple to see that is linear, whereas is exponential in the variable ; therefore, for every , as anticipated.

#### References

- I. Aho, "Interactive knapsacks," *Fundamenta Informaticae*, vol. 44, no. 1-2, pp. 1–23, 2000.
- I. Aho, "On the approximability of interactive knapsacks problems," in *Proceedings of the 28th Annual Conference on Current Trends in Theory and Practice of Informatics (SOFSEM '01)*, vol. 2234 of *Lecture Notes in Computer Science*, pp. 152–159, Piešťany, Slovak Republic, 2001.
- I. Aho, "New polynomial-time instances to various knapsack-type problems," *Fundamenta Informaticae*, vol. 53, no. 3-4, pp. 199–228, 2002.
- I. Aho, *Interactive Knapsacks: Theory and Application*, Report A-2002-13, University of Tampere, 2002.
- E. Y.-H. Lin, "A bibliographical survey on some well-known non-standard knapsack problems," *INFOR*, vol. 36, no. 4, pp. 274–317, 1998.
- D. S. Hirschberg, "Algorithms for the longest common subsequence problem," *Journal of the ACM*, vol. 24, no. 4, pp. 664–675, 1977.
- D. Gusfield, *Algorithms on Strings, Trees, and Sequences*, Cambridge University Press, Cambridge, UK, 1997.