ISRN Electronics
Volume 2012, Article ID 859820, 10 pages
http://dx.doi.org/10.5402/2012/859820
Research Article

Polynomial Time Instances for the IKHO Problem

1Department of Computer Science, University of Verona, 37134 Verona, Italy
2Department of Information Engineering and Computer Science, University of Trento, 38123 Povo, Italy

Received 20 January 2012; Accepted 7 February 2012

Academic Editors: C. W. Chiou and T. L. Kunii

Copyright © 2012 Romeo Rizzi and Luca Nardin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The Interactive Knapsacks Heuristic Optimization (IKHO) problem is a particular knapsacks model in which, given an array of knapsacks, every insertion in a knapsack also affects the other knapsacks, in terms of weight and profit. The IKHO model was introduced by Isto Aho to model instances of the load clipping problem. The IKHO problem is known to be APX-hard and, motivated by this negative fact, Aho exhibited a few classes of polynomial instances for the IKHO problem. These instances were obtained by limiting the ranges of two structural parameters, c and u, which describe the extent to which an insertion in a knapsack influences the nearby knapsacks. We identify a new and broad class of instances allowing for a polynomial time algorithm. More precisely, we show that the restriction of IKHO to instances where (c+2u)/c is bounded by a constant can be solved in polynomial time, using dynamic programming.

1. Introduction

The Interactive Knapsacks Heuristic Optimization problem (IKHO) is a particular knapsacks model in which, given an array of knapsacks, an insertion in a knapsack influences the nearest knapsacks, in terms both of weight and of profit. It was introduced by Aho in [1] for solving the load clipping problem arising in electricity management applications. It belongs to the general framework of Interactive Knapsacks problems (IK), also defined in [1], which has several other applications, for example, in electricity management, single and multiprocessor scheduling, and packing of n-dimensional items into different knapsacks. Since IKHO is NP-complete [1] and APX-hard [2], the search for polynomial time instances is very important. In [3], Aho introduces a few classes of such instances by restricting the values of certain parameters of the problem: c and u, which determine the extent of the influence on other knapsacks caused by an insertion, and K, which limits the number of insertions. We continue this line of investigation by adding a wide and significant class of polynomial time instances for the IKHO problem: the case where (c+2u)/c is bounded.

Intuitively, in IKHO, when we insert an item in a knapsack, this item is replicated (cloned) into the c next knapsacks (hence forming a cloning block over c+1 consecutive knapsacks), and it causes an arbitrary but predetermined modification (radiation) of the weight and profit of the knapsacks at distance at most u from the cloning block, on both sides of it. After a knapsack is involved in a cloning operation, we are not allowed to insert any other item in that knapsack; therefore, the cloning blocks are disjoint. In this paper, we are mainly interested in the case where the ratio between the whole width c+2u of the influenced zone (cloning plus radiation zones) and the width c of the cloning part is bounded by a constant r. We propose a dynamic programming algorithm based on a matrix of size O(c^r × K), with time complexity O(c^r × m × K), where m is the number of knapsacks and K is the maximum number of cloning blocks that we can insert in the knapsacks array.

In Section 2, we give the original formulation of the problem from [1] and we then simplify it to ease our exposition in later sections. In Section 3, we give the algorithm. In Section 4, we sharpen the complexity result. Finally, in Section 5, we design a memory saving version of that algorithm.

We conclude this section by defining some useful notation. Henceforth, we write 0^m to denote the zero vector of length m, that is, (0^m)_i = 0 for i = 1,…,m. Moreover, for binary strings x and y, we write x·y to indicate their concatenation. Furthermore, [a,b] always denotes a range of integers and, if a > b, we assume that [a,b] is empty. In the same way, if a > b, the notation "for i = a,…,b" means "for no i".

2. Formulation of the IKHO Problem

We are given an array of m knapsacks, each one of capacity b_ℓ, for ℓ = 1,…,m. There is a single item that we are asked to insert at most K times into the knapsacks array, where K is a natural number given as part of the input. The profit and weight of an insertion depend on the knapsack in which we insert: for i = 1,…,m, the naturals w_i and p_i represent the weight and the profit of an insertion in knapsack i. The main feature of IK problems is that every insertion also influences the weight and profit of the nearby knapsacks. In this way, the weight charged on a knapsack, and the profit relative to it, are established by the insertions in all the knapsacks. To describe this mechanism, Aho introduces a function (called interactive function) I_i for each knapsack i = 1,…,m, which determines the interaction from a knapsack i to every other knapsack. In particular, given naturals c and u, for each knapsack i, we have

I_i(ℓ) = 1 for knapsacks ℓ ∈ [i, i+c],
I_i(ℓ) ∈ ℚ arbitrary for knapsacks ℓ ∈ [i−u, i−1] ∪ [i+c+1, i+c+u],
I_i(ℓ) = 0 for all other knapsacks.   (1)

The range [i, i+c] is called the cloning block and the range [i−u, i−1] ∪ [i+c+1, i+c+u] the radiation part. The role of the functions I_i becomes clear in formulas (2)–(5).

The decision variables used to denote in which knapsacks we do the insertions are x_i ∈ {0,1} for i = 1,…,m. Given in input m, b_i, w_i, p_i, c, u ∈ ℕ and K ∈ ℕ∖{0}, an IKHO problem is to

maximize ∑_{i=1}^{m} x_i ∑_{ℓ=1}^{m} I_i(ℓ) p_ℓ   (2)

subject to ∑_{i=1}^{m} x_i I_i(ℓ) w_i ≤ b_ℓ, for ℓ = 1,…,m,   (3)

∑_{i=1}^{m} x_i ≤ K,   (4)

x_j = 0, for i < j ≤ i+c, when x_i = 1,   (5)

where i = 1,…,m in (5). Clearly, since a feasible x must belong to {0,1}^m, at most one item may be put in each single knapsack. Moreover, notice that in Constraint (3), which imposes that the knapsacks are not overfilled, I_i(ℓ) is multiplied by the weight w_i. Thus, when we insert in knapsack i (x_i = 1), the weight w_i is equally charged on the knapsacks ℓ ∈ [i, i+c], for which I_i(ℓ) = 1. This is the reason for calling the range [i, i+c] a cloning block. Regarding the knapsacks in the radiation part [i−u, i−1] ∪ [i+c+1, i+c+u], an arbitrary portion of the weight w_i is added to or subtracted from them (since I_i(ℓ) can be negative). Similar operations are performed in the objective function (2) with the profits p_ℓ. Furthermore, Constraint (4) specifies the maximum number of cloning blocks to be put into the knapsacks array, while Constraint (5) tells that the cloning blocks must be disjoint. The IKHO model is more widely explained and motivated in [1, 4].
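Constraints (2)–(5) can be checked directly on a candidate vector x. Below is a small illustrative sketch (the function names and the dense matrix representation of I are ours, not from the paper); `I[i][l]` holds I_i(ℓ) with 0-based indices:

```python
def is_feasible(x, I, w, b, K, c):
    """Check Constraints (3)-(5) for a 0/1 insertion vector x (0-based indices)."""
    m = len(x)
    # Constraint (3): the weight charged on each knapsack must fit its capacity.
    for l in range(m):
        if sum(x[i] * I[i][l] * w[i] for i in range(m)) > b[l]:
            return False
    # Constraint (4): at most K insertions overall.
    if sum(x) > K:
        return False
    # Constraint (5): cloning blocks must be disjoint.
    for i in range(m):
        if x[i] and any(x[j] for j in range(i + 1, min(i + c + 1, m))):
            return False
    return True

def objective(x, I, p):
    """Objective (2): total profit collected over all knapsacks."""
    m = len(x)
    return sum(x[i] * I[i][l] * p[l] for i in range(m) for l in range(m))
```

For instance, with c = 1 and u = 0, each I_i takes value 1 exactly on the two knapsacks of the cloning block and 0 elsewhere.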

2.1. Simplifying the Formulation

Our first step here is to simplify the formulation of the problem by making the notion of weight independent of the interaction functions, and the profit dependent only on the knapsack where we insert. This is accomplished by exploiting the transformation proposed by Aho in [2], introduced there in order to reduce IKHO to MDKP, an ILP formulation surveyed in [5]. We define

p'_i := ∑_{ℓ=1}^{m} I_i(ℓ) p_ℓ,   w'_{iℓ} := I_i(ℓ) w_i,   (6)

so that p'_i represents the total profit of an insertion in knapsack i, and w'_{iℓ} is the weight that an insertion in knapsack i charges on knapsack ℓ. From the features of I_i(ℓ) exposed in (1), it follows that w'_{iℓ} and p'_i are both rational numbers; in particular, they can also be negative.
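Transformation (6) is a direct computation; here is a sketch (illustrative names, 0-based indices, with `I[i][l]` holding I_i(ℓ)):

```python
def transform(I, w, p):
    """Formula (6): p'_i = sum_l I_i(l) p_l and w'_{il} = I_i(l) w_i."""
    m = len(w)
    p_prime = [sum(I[i][l] * p[l] for l in range(m)) for i in range(m)]
    w_prime = [[I[i][l] * w[i] for l in range(m)] for i in range(m)]
    return p_prime, w_prime
```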

Now, we can reformulate the problem as follows:

maximize ∑_{i=1}^{m} x_i p'_i   (7)

subject to ∑_{i=1}^{m} x_i w'_{iℓ} ≤ b_ℓ, for ℓ = 1,…,m,   (8)

and Constraints (4) and (5). Henceforth we always refer to this latter formulation of the problem, since it simplifies the description of the algorithm.

Let us restate the behavior of the weights w' as inherited from the functions I_i. For i = 1,…,m, we have that

w'_{iℓ} = w_i (and hence constant) for ℓ ∈ [i, i+c],
w'_{iℓ} arbitrary for ℓ ∈ [i−u, i−1] ∪ [i+c+1, i+c+u],
w'_{iℓ} = 0 for all other ℓ ∈ [1, m].   (9)

2.2. Polynomial Time Instances

The classes of instances isolated by Aho are the following: (a) the instances with c = u = 0; (b) those with K = O(1); (c) those with u = 0; (d) those with c + 2u + 1 = O(log(m^α)), for a constant α.

The restriction of IKHO obtained by considering only the instances in (a) corresponds to the situation in which there are no interactions, whence the decision on whether to insert an item can be taken independently for each knapsack. As for (b), notice that any instance of IKHO admits at most m^K feasible solutions, which is only a polynomial number of possibilities whenever K = O(1). We refer to [3] for details on Aho's algorithms for instances of types (c) and (d).

In Section 3, we describe an algorithm for IKHO with time complexity O(m × K × c^{(c+2u)/c}). When (c+2u)/c is bounded by a constant, it clearly becomes a polynomial time algorithm. Indeed, note that the term c^{(c+2u)/c} is polynomial also when c+2u = O(ln(m^α)), for a constant α. In fact, c^{(c+2u)/c} = (c^{1/c})^{c+2u}, and c^{1/c} is a decreasing function of c. Therefore, our results imply those reported in (a), (c), and (d).

3. The Algorithm

In the following, A = (m, b, c, u, w', p', K) always denotes the input IKHO instance. Let L := c + 2u. A binary string s ∈ {0,1}^L is called a signature if it obeys Constraint (5), that is, if s_i + s_j ≤ 1 for all i, j ∈ [1, L] such that 1 ≤ |i−j| ≤ c. We denote by 𝒮 the set of all signatures.
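For small L, the set 𝒮 can be enumerated by brute force, which is a handy reference when testing the faster machinery of Section 4 (an illustrative sketch of our own, not the authors' code):

```python
from itertools import product

def signatures(L, c):
    """All s in {0,1}^L obeying Constraint (5): no two 1's at distance <= c."""
    keep = lambda s: all(not (s[i] and s[j])
                         for i in range(L)
                         for j in range(i + 1, min(i + c + 1, L)))
    return [s for s in product((0, 1), repeat=L) if keep(s)]
```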

Given a solution x ∈ {0,1}^m, |x| := ∑_{i=1}^{m} x_i denotes the number of insertions prescribed by x. Moreover, for each ℓ ∈ [1,m], w(x,ℓ) := ∑_{i=1}^{m} x_i w'_{iℓ} is the weight charged on knapsack ℓ by the solution x. Then, we say that a solution x obeys the capacity constraint (Constraint (8)) on the knapsacks [α,β] if and only if w(x,ℓ) ≤ b_ℓ for each ℓ ∈ [α,β]. Furthermore, we write x ⊢ (s,h) when x_{h+j} = s_j for each j ∈ [1,L] and x_i = 0 for each i ∈ [1,h], that is, when x starts with the signature s in the knapsacks [h+1, h+L], whence having the form x = 0^h · s · x'.

3.1. The Subproblems of Our DP Approach

Given a natural k ≤ K, a natural h ≤ m−L, and a signature s, we consider a modified problem Sub[k,h,s], whose solutions are those x ∈ {0,1}^m which obey

x ⊢ (s,h),   (10)

w(x,ℓ) ≤ b_ℓ for all ℓ ∈ [h+c+u+1, m],   (11)

|x| ≤ k,   (12)

and Constraint (5). The objective function is the same as in the IKHO formulation. The differences between the IKHO problem and the subproblems defined above lie in the additional parameters s, h, k and their use in the constraints. (i) Constraint (10) fixes the first insertions in compliance with the signature s. (ii) The range on which we check the capacity constraint in (11) is [h+c+u+1, m], a subset of the range [1,m] checked in IKHO. (iii) By Constraint (12), we can do at most k insertions. Notice that in general (12) is more restrictive than (4).

In the following, we denote by 𝒳[A] the space of solutions to the IKHO instance A, and by 𝒳[k,h,s] the space of solutions to the modified problem Sub[k,h,s]. Moreover, let opt[A] be the maximum value of a solution in 𝒳[A] and opt[k,h,s] the maximum value of a solution in 𝒳[k,h,s]. It is assumed that opt[k,h,s] = −∞ when 𝒳[k,h,s] is empty.

3.2. The Dynamic Programming Algorithm

Our dynamic programming approach is based on Lemmas 1 and 2, whose proofs are given later in this subsection. In particular, Lemma 1 shows how to read out an optimal solution to the IKHO instance, from the optimal solutions to the subproblems.

Lemma 1. Let 𝒮_0 ⊆ 𝒮 be the set of the signatures s such that y = s · 0^{m−L} obeys the capacity constraint on all knapsacks ℓ ∈ [1, c+u]. Then,

opt[A] = max_{s ∈ 𝒮_0} opt[K, 0, s].   (13)

Indeed, 𝒳[A] = ⋃_{s ∈ 𝒮_0} 𝒳[K, 0, s].

Lemma 2 explains how to recursively solve the subproblems. We need some additional notation. Given x ∈ {0,1}^m, b ∈ {0,1}, and i ∈ [1,m], we write x + (b,i) to denote the binary string obtained from x by setting its i-th element to b. Moreover, if X ⊆ {0,1}^m, we let X + (b,i) := {x + (b,i) : x ∈ X}. Furthermore, for each b ∈ {0,1} and h ∈ [0, m−L], a signature s ∈ 𝒮 is called (h,b)-good if z := 0^h · s · b · 0^{m−h−L−1} obeys the capacity constraint on the knapsack h+c+u+1 and if s' := (s_2, s_3, …, s_L, b) ∈ 𝒮.

Lemma 2. For k = 1,…,K, h = 0,…,m−L−1, and s ∈ 𝒮,

opt[k,h,s] = max_{b ∈ {0,1} : s is (h,b)-good} opt[k − s_1, h+1, s' := (s_2, s_3, …, s_L, b)] + s_1 p'_{h+1}.   (14)

Indeed,

𝒳[k,h,s] = ⋃_{b ∈ {0,1} : s is (h,b)-good} 𝒳[k − s_1, h+1, s' := (s_2, s_3, …, s_L, b)] + (s_1, h+1).   (15)

The base of the recursion, that is, the cases where h = m−L and where k = 0, is handled in Section 3.3.

In order to prove Lemmas 1 and 2, let us begin by pointing out some basic facts that directly derive from the IKHO formulation. Observations 1 and 2 play an important role in the formal proofs of these lemmas. For this reason, these observations and their proofs are visualized in Figures 1 and 2, respectively.

Figure 1: Representing Observation 1. The gray area indicates where x is equal to y, that is, the knapsacks [1,a]. The arrow represents the leftmost possible radiation, starting from the leftmost possible differing bit. Since this radiation cannot reach the knapsacks [1, a−u], then, for ℓ ∈ [1, a−u], w(x,ℓ) and w(y,ℓ) cannot differ, as they depend only on the common bits [1,a].
Figure 2: Representing Observation 2.

Observation 1. Assume a > u. Let x, y ∈ {0,1}^m be such that x_i = y_i for i = 1,…,a. Then, for each ℓ ≤ a−u, x satisfies the capacity constraint on knapsack ℓ if and only if y satisfies it. Indeed, for each ℓ ≤ a−u, w(x,ℓ) = w(y,ℓ).

Proof. Let ℓ ≤ a−u. Remember that w(x,ℓ) := ∑_{i=1}^{m} x_i w'_{iℓ}. However, by (9), w'_{iℓ} = 0 for ℓ < i−u, that is, when i > ℓ+u. Moreover, x_i = y_i for i = 1,…,ℓ+u, since ℓ+u ≤ a. Therefore,

w(x,ℓ) := ∑_{i=1}^{m} x_i w'_{iℓ} = ∑_{i=1}^{ℓ+u} x_i w'_{iℓ} = ∑_{i=1}^{ℓ+u} y_i w'_{iℓ} = ∑_{i=1}^{m} y_i w'_{iℓ} =: w(y,ℓ).   (16)

Observation 2 covers the left/right-reverse situation.

Observation 2. Assume a ≤ m−c−u. Let x, y ∈ {0,1}^m be such that x_i = y_i for each i = a,…,m. Then, for each ℓ ≥ a+c+u, x satisfies the capacity constraint on knapsack ℓ if and only if y satisfies it. Indeed, for each ℓ ≥ a+c+u, w(x,ℓ) = w(y,ℓ).

Proof. Let ℓ ≥ a+c+u. By (9), w'_{iℓ} = 0 for ℓ > i+c+u, that is, when i < ℓ−c−u. Moreover, x_i = y_i for i = ℓ−c−u,…,m, since ℓ−c−u ≥ a. Therefore,

w(x,ℓ) := ∑_{i=1}^{m} x_i w'_{iℓ} = ∑_{i=ℓ−c−u}^{m} x_i w'_{iℓ} = ∑_{i=ℓ−c−u}^{m} y_i w'_{iℓ} = ∑_{i=1}^{m} y_i w'_{iℓ} =: w(y,ℓ).   (17)

Now, we are ready to prove Lemmas 1 and 2.

Proof of Lemma 1. First, let us show that every feasible solution to IKHO is a feasible solution to one of the subproblems Sub[K,0,s] for some s ∈ 𝒮_0. Clearly, for each x ∈ 𝒳[A], taking s := (x_1, x_2, …, x_L), we get that x ∈ 𝒳[K,0,s]. By exploiting Observation 1 with a = L, we get that y := s · 0^{m−L} obeys the capacity constraint on the knapsacks [1, c+u], and then s ∈ 𝒮_0. To prove the opposite inclusion, take s ∈ 𝒮_0 and x ∈ 𝒳[K,0,s]. We show that x ∈ 𝒳[A]. Constraint (4), Constraint (5), and the capacity constraint on the knapsacks [c+u+1, m] are clearly satisfied. Since s ∈ 𝒮_0, Observation 1 lets us verify the capacity constraint on the knapsacks [1, c+u].

Lemma 2 directly follows from two opposite inclusions, which we show separately. While reading these proofs, Figure 3 can be useful to visualize the structure of the vectors x, y involved.

Figure 3: Building a feasible solution to Sub[k,h,s]. In order to construct a feasible solution x to the current subproblem Sub[k,h,s], we exploit a feasible solution y to one of the subsequent subproblems Sub[k − s_1, h+1, s' := (s_2, s_3, …, s_L, b)], for a b ∈ {0,1}. Clearly, x must satisfy the capacity constraint on the knapsacks [h+c+u+1, m]. Moreover, notice that the radiation from the bit s_1 does not reach the knapsacks after h+c+u+1, and then, for those knapsacks, x satisfies the capacity constraint only if y does. Therefore, we need to check the capacity constraint only for the knapsack h+c+u+1. Furthermore, the bits after b are not involved when we check the capacity constraint on that knapsack. Thus, in order to check the capacity constraint on the knapsack h+c+u+1, it is enough to know the bits of the string s·b. Notice that, for a signature s, L is the smallest width for which the properties shown above hold.

Proof of Lemma 2. First, we show that

𝒳[k,h,s] ⊆ ⋃_{b ∈ {0,1} : s is (h,b)-good} 𝒳[k − s_1, h+1, s' := (s_2, s_3, …, s_L, b)] + (s_1, h+1).   (18)

Suppose x ∈ 𝒳[k,h,s] and let b := x_{h+L+1}. The inclusion follows from two facts: (a) s is (h,b)-good; (b) x ∈ 𝒳[k − s_1, h+1, s'] + (s_1, h+1). Take z := 0^h · s · b · 0^{m−h−L−1}. Since x obeys the capacity constraint on the knapsacks [h+c+u+1, m] and x_i = z_i for i = 1,…,h+L+1, by exploiting Observation 1 with a = h+L+1, we get that z satisfies the capacity constraint on the knapsack h+c+u+1. Clearly, s' := (s_2, s_3, …, s_L, b) satisfies Constraint (5), being a substring of x. Hence, s is (h,b)-good.
In order to show that x ∈ 𝒳[k − s_1, h+1, s'] + (s_1, h+1), we take y := x + (0, h+1) and we show that y ∈ 𝒳[k − s_1, h+1, s']. From the definition of the subproblems, it is simple to verify that y obeys Constraints (10), (12), and (5) of Sub[k − s_1, h+1, s']. Moreover, since x satisfies the capacity constraint for each ℓ ∈ [h+c+u+1, m], by applying Observation 2 with a = h+2, we get that y satisfies the capacity constraint over the knapsacks [h+c+u+2, m], and then y obeys Constraint (11) too.

Second, we prove that

⋃_{b ∈ {0,1} : s is (h,b)-good} 𝒳[k − s_1, h+1, s' := (s_2, s_3, …, s_L, b)] + (s_1, h+1) ⊆ 𝒳[k,h,s].   (19)

Proof. Take a b ∈ {0,1} such that s is (h,b)-good and a y ∈ 𝒳[k − s_1, h+1, s' := (s_2, s_3, …, s_L, b)]. We will show that x := y + (s_1, h+1) ∈ 𝒳[k,h,s]. Constraints (10) and (12) of Sub[k,h,s] easily follow from the definition of the subproblems. Moreover, since y satisfies Constraint (5) and s ∈ 𝒮, then x satisfies Constraint (5).
It remains to verify Constraint (11). Since s is (h,b)-good, z := 0^h · s · b · 0^{m−h−L−1} obeys the capacity constraint on the knapsack h+c+u+1. By applying Observation 1 with a = h+L+1, we derive that x also obeys the capacity constraint on that knapsack. Moreover, since y ∈ 𝒳[k − s_1, h+1, s'], y obeys the capacity constraint on the knapsacks [h+c+u+2, m]. Since y_i = x_i for i = h+2,…,m, by Observation 2, x obeys the capacity constraint also on the knapsacks [h+c+u+2, m].

3.3. The Base of the Recursion

We have two base cases. Observation 3 handles the case when โ„Ž=๐‘šโˆ’๐ฟ, while Observation 4 treats the case when ๐‘˜=0.

Observation 3. Consider h = m−L, any k = 1,…,K, and s ∈ 𝒮. Moreover, let z := 0^h · s. If |z| ≤ k and w(z,ℓ) ≤ b_ℓ for each ℓ ∈ [m−u+1, m], then opt[k,h,s] = ∑_{i=1}^{m} z_i p'_i. Otherwise, opt[k,h,s] = −∞.

Proof. Clearly, since h+L = m and by Constraint (10), there cannot exist a solution to Sub[k,h,s] different from z. Moreover, notice that h+c+u+1 = m−u+1.

Observation 4. Consider k = 0, any h = 0,…,m−L, and s ∈ 𝒮. If s = 0^L, then opt[k,h,s] = 0; otherwise, opt[k,h,s] = −∞.

Proof. Clearly, by Constraint (12), 0^m can be the only solution to Sub[0,h,s].
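Putting Lemmas 1 and 2 together with the base cases above, the whole scheme can be sketched as a memoized recursion. This is our own illustrative implementation, not the authors' code: it uses 0-based indices, plain recursion in place of an explicit matrix, and assumes the transformed inputs w'_{iℓ}, p'_i of Section 2.1, nonnegative capacities (so the k = 0 case falls out of the recursion), and m ≥ L:

```python
from functools import lru_cache
from itertools import product

def solve_ikho(m, c, u, K, w_prime, p_prime, b):
    """Return opt[A] via the recursion (14) and the base case of Observation 3."""
    L = c + 2 * u
    NEG = float("-inf")

    def ok_sig(s):  # Constraint (5) inside a window of width L
        return all(not (s[i] and s[j])
                   for i in range(L) for j in range(i + 1, min(i + c + 1, L)))

    def load(x, l):  # w(x, l): weight charged on knapsack l
        return sum(w_prime[i][l] for i in range(len(x)) if x[i])

    @lru_cache(maxsize=None)
    def opt(k, h, s):
        if sum(s) > k:                      # not enough insertions left for s
            return NEG
        if h == m - L:                      # Observation 3: x is forced to be 0^h . s
            z = [0] * h + list(s)
            if all(load(z, l) <= b[l] for l in range(m - u, m)):
                return sum(p_prime[i] for i in range(m) if z[i])
            return NEG
        best = NEG
        for bit in (0, 1):                  # Lemma 2: try both continuations
            s2 = s[1:] + (bit,)
            if not ok_sig(s2):
                continue
            z = [0] * h + list(s) + [bit]   # enough bits to settle knapsack h+c+u
            if load(z, h + c + u) > b[h + c + u]:
                continue                    # s is not (h, bit)-good
            sub = opt(k - s[0], h + 1, s2)
            if sub != NEG:
                best = max(best, sub + s[0] * p_prime[h])
        return best

    best = NEG                              # Lemma 1: scan the signatures in S_0
    for s in product((0, 1), repeat=L):
        if not ok_sig(s):
            continue
        y = list(s) + [0] * (m - L)
        if all(load(y, l) <= b[l] for l in range(c + u)):
            best = max(best, opt(K, 0, s))
    return best
```

On small instances, a brute-force scan of all x ∈ {0,1}^m yields the same optimum, which is a convenient way to check this sketch.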

4. Complexity

In this section, we prove Lemma 3.

Lemma 3. Let r be a constant such that L/c is bounded by r when c > 0, and L is bounded by r when c = 0. The above algorithm takes time and space O(c^r × m × K).

Our algorithm exploits a three-dimensional matrix storing the values opt[k,h,s], for k = 1,…,K, h = 0,…,m−L, and s ∈ 𝒮. We also need a matrix of the same size which records, for each subproblem Sub[k,h,s], the subsequent subproblem used to compute opt[k,h,s]; this lets us rebuild an optimal solution at the end. The space complexity of the algorithm is then O(K × m × |𝒮|). We need to estimate the value of |𝒮|, but let us first compute the time complexity.

In order to evaluate the base case of our dynamic programming algorithm, we first refer to Observation 3. Clearly, for k < |s|, opt[k,h,s] = −∞, because there must be at least |s| insertions in a solution that starts with the signature s. Moreover, since z := 0^h · s is the only feasible solution to Sub[k,h,s], it is clear that, for each k > |s|, opt[k,h,s] = opt[|s|,h,s], by Constraint (12). Therefore, we have to compute only the case k = |s|, and then the number of base case subproblems to be computed is only |𝒮|. Since s ∈ {0,1}^L, time O(L) suffices to compute both the profit of z and a single value w(z,ℓ), for an ℓ ∈ [m−u+1, m]. Then, to check the capacity constraint on all the knapsacks [m−u+1, m], we need u × L operations. Thus, the base case h = m−L can be computed in time O(|𝒮| × L × u). About the base case k = 0, as handled by Observation 4, note that if we encode the signatures s ∈ 𝒮 (such an encoding is given in Section 4.2), we can check the condition s = 0^L in O(1). Regarding the general case, by (14), for solving a subproblem we have to check whether s is (h,b)-good, for b = 0,1. To check whether z := 0^h · s · b · 0^{m−h−L−1} satisfies the capacity constraint on the knapsack h+c+u+1, and whether s' ∈ 𝒮, we spend O(L) operations. Since the number of subproblems is K × (m−L) × |𝒮|, we need O(|𝒮| × m × L × K) time to fill the matrix opt. Moreover, by (13), we have to scan over the s ∈ 𝒮_0 in order to find the maximum value of opt[K,0,s]. Clearly, |𝒮_0| ≤ |𝒮|, but we need L × (c+u) operations per signature to check the capacity constraints on all knapsacks ℓ ∈ [1, c+u], because each signature s has width L. Thus, we spend O(|𝒮| × L²) operations to find opt[A].
Furthermore, rebuilding the best solution takes O(m) time. Therefore, the part in which we recursively compute the subproblems dominates the complexity of the entire algorithm; it depends on the value of |𝒮|, as does the space complexity. In Section 4.1, we give an estimate of this value. Moreover, in Section 4.2, we show an ordering of the set 𝒮 that permits us to check the capacity constraint on a knapsack in constant time.

4.1. Estimating |๐’ฎ|

In the case where c is a constant, we can directly estimate |𝒮|.

Observation 5. If c is a constant, then |𝒮| = O(1).

Proof. Clearly, |𝒮| ≤ 2^L, because the number of binary strings s ∈ {0,1}^L is 2^L. Moreover, when c = 0, we assumed L constant. When c > 0, since L/c ≤ r, we get that L ≤ rc, with r and c constant.

For nonconstant values of c, let us find a general form for |𝒮|. Let S_c(n) denote the number of binary strings s ∈ {0,1}^n such that s obeys Constraint (5); note that Constraint (5) contains the parameter c. When n ∈ [0,c], we have n places where to insert, and at most one insertion is possible by Constraint (5); moreover, we have to count the string with no insertions. Thus, we get that S_c(n) = n+1 for all n ∈ [0,c]. For greater values of n, we refer to the recursion shown in Figure 4. If the first bit of s is 0, the choice of the following bits is not constrained; therefore, it is enough to count the strings s ∈ {0,1}^{n−1} that obey Constraint (5), and these are exactly S_c(n−1). If the first bit is 1, by Constraint (5) the following c bits are necessarily 0's; in this case, we continue choosing after the (c+1)-th bit, so we have S_c(n−c−1) ways to choose the remaining bits. Therefore, we can express S_c(n) by the recurrence equation

S_c(n) = S_c(n−1) + S_c(n−c−1),  with S_c(n) = n+1 for all n ∈ [0,c].   (20)
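Recurrence (20) can be evaluated in O(n) time with a simple table; a sketch with our own naming:

```python
def count_signatures(c, n):
    """S_c(n) via Recurrence (20): S_c(n) = S_c(n-1) + S_c(n-c-1), S_c(n) = n+1 for n <= c."""
    S = [k + 1 for k in range(min(c, n) + 1)]   # base: S_c(0..c) = 1, 2, ..., c+1
    for k in range(c + 1, n + 1):
        S.append(S[k - 1] + S[k - c - 1])
    return S[n]
```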

Figure 4: The influence of the first bit choice.

Lemma 4 gives a general estimate of the recurrence S_c(n), in order to bound |𝒮| for nonconstant values of c.

Lemma 4. Let c ≥ e. For each n ≥ c, S_c(n) ≤ ((c+1)/c) · c^{n/c}.

Proof. We prove the claim by induction on n, postponing to the appendix the proof of the base of the induction, that is, the case c ≤ n ≤ 2c. For n > 2c, we have the induction step. Clearly,

S_c(n) = S_c(n−1) + S_c(n−c−1)
 ≤ ((c+1)/c) c^{(n−1)/c} + ((c+1)/c) c^{(n−c−1)/c}
 = ((c+1)/c) (c^{(n−1)/c} + c^{(n−c−1)/c})
 = ((c+1)/c) c^{(n−c−1)/c} (1 + c^{c/c})
 = ((c+1)/c) c^{(n−c−1)/c} (1 + c)
 = ((c+1)/c) c^{n/c} (1+c)/c^{(c+1)/c}.   (21)

Hence, it is sufficient to show that (1+c)/c^{(c+1)/c} ≤ 1, or equivalently that 1+c ≤ c^{(c+1)/c}. Since c^{(c+1)/c} = c · c^{1/c}, it remains to show that 1 + 1/c ≤ c^{1/c}.
We know that e^x ≥ x+1 for each real x. By substituting x = 1/c and noticing that c ≥ e, we get that 1 + 1/c ≤ e^{1/c} ≤ c^{1/c}.

Since L = c+2u ≥ c, we can apply Lemma 4 to deduce that |𝒮| = S_c(L) = O(c^{L/c}), as (c+1)/c ≤ 2 for c > 0. Therefore, when L/c is bounded by a constant r, we get |𝒮| = O(c^r).

4.2. Ranking the Set ๐’ฎ

We use Recurrence (20) to define a function pos: 𝒮 → [0, |𝒮|−1], which provides a unique index for each signature and hence gives a ranking of the set 𝒮.

Definition 5. For each s ∈ 𝒮,

pos(s) := ∑_{i=1}^{L} s_i · S_c(L−i).   (22)

Note that S_c(L−i) is the number of signatures of length L−i, which is also the number of signatures of length L−i+1 that start with a 0. Hence, as illustrated in Figure 5, in the step of the sum where i = 1, we place the signatures with s_1 = 1 after all the signatures with s_1 = 0, which are exactly S_c(L−1). For i = 2,…,L, we recursively do the same, locating substrings of length L−i.

Figure 5: Ordering the signatures.

Conversely, given an integer p ∈ [0, |𝒮|−1], the unranking procedure is the following. Take s := 0^L. For i = 1,…,L, if p ≥ S_c(L−i), set s_i = 1 and p := p − S_c(L−i).
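Definition 5 and the unranking procedure above translate directly into code. An illustrative sketch of our own, with 0-based tuples and the table `Sc[n]` holding S_c(n):

```python
def sc_table(c, n_max):
    """Sc[n] = S_c(n) for n = 0..n_max, from Recurrence (20)."""
    S = [k + 1 for k in range(min(c, n_max) + 1)]
    for k in range(c + 1, n_max + 1):
        S.append(S[k - 1] + S[k - c - 1])
    return S

def pos(s, Sc):
    """Rank of a signature (Definition 5), with s_1 as the leftmost bit."""
    L = len(s)
    return sum(s[i] * Sc[L - 1 - i] for i in range(L))

def unrank(p, L, Sc):
    """Inverse of pos: rebuild the signature bit by bit, left to right."""
    s = [0] * L
    for i in range(L):
        if p >= Sc[L - 1 - i]:
            s[i] = 1
            p -= Sc[L - 1 - i]
    return tuple(s)
```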

Evidently, in order to efficiently perform this ordering of the set 𝒮, we need to compute and store the recurrence S_c(n), for n = 1,…,L−1, at the beginning of the algorithm. This takes O(L) time and space; the ranking and unranking operations then take O(L) time each.

Indeed, we can avoid encoding and decoding the signatures for the computation of each subproblem. This can be done by initializing, at the beginning of the algorithm, a table that stores, for each position pos(s') of a signature s', the list of bits that change with respect to the previous signature s, that is, the signature having pos(s) = pos(s')−1. It is easy to verify that the following procedure finds the next signature from the previous one (it works similarly to the function that increments a binary counter, but taking Constraint (5) into account). (i) Scan the previous string s starting from the least significant (rightmost) bit and find the first range of c+1 consecutive 0's, or a range of consecutive 0's that includes the most significant (leftmost) bit. (ii) If such a range exists, the next string is obtained by setting to 1 the rightmost bit of the range, and by setting to 0 all the bits to the right of the range. (iii) If such a range does not exist, s is the last signature.
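The three steps above can be sketched as follows (our own code, for illustration; signatures are tuples with the most significant bit first):

```python
def next_signature(s, c):
    """Successor of s in rank order, or None if s is the last signature."""
    L = len(s)
    t = list(s)
    run = 0                                  # length of the current run of 0's
    for i in range(L - 1, -1, -1):           # scan from the least significant bit
        if t[i]:
            run = 0
            continue
        run += 1
        if run == c + 1 or i == 0:           # found a usable range of 0's
            j = i + run - 1                  # its rightmost bit
            t[j] = 1
            for k in range(j + 1, L):        # clear everything to the right
                t[k] = 0
            return tuple(t)
    return None
```

Starting from 0^L and iterating yields every signature exactly once, in the pos-order of Section 4.2.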

Above all, observe that, by Constraint (5), there are at most L/(c+1) insertions in a signature s ∈ 𝒮, and we know that L/(c+1) ≤ L/c ≤ r. Therefore, for any ranking of the set 𝒮, the number of bits changing between two adjacent signatures is O(r). Thus, if we know the changing bits from a signature s to the next s', we can use an incremental approach to compute the value w(s',ℓ) from w(s,ℓ) in constant time. This allows us to check in constant time the capacity constraints of the (h,b)-goodness test in (14), those in the definition of the set 𝒮_0 in (13), and those arising when computing the base case (Observation 3). Note that also Constraint (5) of the (h,b)-goodness test can be checked in O(r) time with the same technique.

Moreover, note that, when computing the matrix opt, if we place the loop over the variable s outside the loop over the variable k, we clearly need to find the next signature and check the capacity constraints only once every K subproblems. In this way, the cost of finding the next signature becomes negligible.

Thus, we can conclude that our algorithm has time and space complexity O(c^r × m × K).

5. Memory Saving Version

Consider (14). Given an h ∈ [1, m−L−1], in order to compute opt[k, h, s] for each s ∈ 𝒮 and k ≤ K, we only need the elements of the matrix opt relative to h + 1. Moreover, by (13), only the elements with h = 0 are required for computing opt[A]. Thus, in order to work out the profit of a best solution, we only need O(|𝒮| × K) space. Unfortunately, this simplification does not apply to the matrix used to reconstruct the best solution itself.
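A minimal sketch of this two-slice evaluation follows; `transition` and `base` are hypothetical stand-ins for recurrence (14) and its base case, and the keys range over the insertion budget k and the signatures:

```python
def sweep_two_slices(m, L, K, signatures, transition, base):
    """Space-saving evaluation: since the recurrence links slice h only to
    slice h+1, two slices of size (K+1) x |S| suffice.  Returns the h = 0
    slice, which is all that opt[A] needs."""
    # slice for h = m - L (the base case), indexed by (k, s)
    nxt = {(k, s): base(k, s) for k in range(K + 1) for s in signatures}
    for h in range(m - L - 1, -1, -1):       # sweep h downwards to 0
        nxt = {(k, s): transition(k, h, s, nxt)
               for k in range(K + 1) for s in signatures}
    return nxt
```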

In [6], Hirschberg showed an elegant and practical space reduction method for the longest common subsequence problem, which works for many dynamic programming algorithms (it is also well presented in [7]). In general, this method makes it possible to compute an optimal solution taking as much space and time as if we only had to compute the optimal solution value. This is accomplished by exploiting the equation which drives the recursion in the original algorithm (in our case, (14)). Its space policy exploits the space improvement mentioned at the beginning of this section.

Conceptually, the basic idea of the method is to halve one dimension of the dynamic programming matrix and to find how the best solution splits along the other dimensions. This permits the two halves so obtained to be solved separately and recursively in the same way. In order to apply this method to our algorithm, we follow these steps.
(i) We halve the knapsack array.
(ii) We find how many insertions of the best solution are placed in each half of the knapsack array.
(iii) We locate a number of insertions placed around the middle of the knapsack array. This allows us to break up the IKHO problem into two independent subproblems, which are then solved recursively.

Notice that, in the last sentence, the word subproblems does not refer to the subproblems Sub[k, h, s] defined in Section 3.

In Section 5.1, we implement this idea. In Section 5.2, we show that the newly defined algorithm decreases the space complexity to O(|𝒮| × K) without increasing the time complexity.

5.1. The Algorithm

In the following, we write x ⊢_R (s, h) when x_{h−L−1+j} = s_j for each j ∈ [1, L] and x_i = 0 for each i ∈ [h, m], that is, when x ends with the signature s in the knapsacks [h−L, h−1].

Given a natural k ≤ K, a natural h ∈ [L+1, m+1], and a signature s, we consider a modified problem Sub_R[k, h, s] whose solutions are those x ∈ {0,1}^m which obey

x ⊢_R (s, h),  w(x, ℓ) ≤ c_ℓ  ∀ℓ ∈ [1, h−u−1],  |x| ≤ k,  and (5). (23)

The subproblems Sub_R[k, h, s] are simply the symmetric transposition of the subproblems Sub[k, h, s]. The only significant difference is that we check the capacity constraints in the range [1, h−u−1], which is not symmetric to the range [h+c+u+1, m]. This dissimilarity is caused by the fact that the radiations produced by an insertion are themselves not symmetric (see (9)).

We denote by 𝒳_R[k, h, s] the space of the feasible solutions to the modified problem Sub_R[k, h, s]. Moreover, opt_R[k, h, s] is the maximum profit of a solution in 𝒳_R[k, h, s] when 𝒳_R[k, h, s] is not empty, and opt_R[k, h, s] = −∞ when 𝒳_R[k, h, s] is empty. Notice that, since the subproblems Sub and Sub_R are symmetric, the properties proved for opt hold symmetrically for opt_R, up to some adjustments due to the fact that the radiations are not exactly symmetric. Thus, computing the matrix opt_R takes the same space and time needed for computing the matrix opt, that is, O(|𝒮| × K) space and O(m × |𝒮| × K) time.

In the following, let |s| be the number of insertions in a signature s. For each s ∈ 𝒮, and for h = 0, …, m−L, let p(s, h) := ∑_{i=1}^{L} s_i p′_{h+i}. Clearly, this function gives, for each knapsack h, the profit obtained by placing the signature s in the knapsacks [h+1, …, h+L]. Moreover, for α, β = 1, …, m, and y ∈ {0,1}^m, let y(α, β) be the substring of y composed by the elements in the range [α, β] (we assume y(α, β) is empty when α > β). Furthermore, for k, k′ ≤ K, for each signature s, for h = 0, …, m−L and h′ = L+1, …, m−L such that h′ − h = L + 1, let

𝒳_R[k′, h′, s] ⊗ 𝒳[k, h, s] := { y′(1, h) · s · y(h′, m) : y′ ∈ 𝒳_R[k′, h′, s], y ∈ 𝒳[k, h, s] }.

This new operator defines a new space of solutions given by the concatenation of the feasible solutions of two symmetric subproblems. Notice that the signature s represents the joining point when concatenating the two strings. This situation is represented in Figure 6, which is also useful to visualize the proof of Lemma 6, which represents the main innovation of our algorithm.
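For illustration, the profit p(s, h) is a plain indexed sum; in the 0-based sketch below, `p_prime` stands for the paper's profit vector p′ and bit j of s plays the role of s_{j+1}:

```python
def signature_profit(s, h, p_prime):
    """p(s, h) = sum_{i=1}^{L} s_i * p'_{h+i}: the profit gained by placing
    the signature s over the knapsacks [h+1, h+L].  With 0-based storage,
    bit j of s multiplies p_prime[h + j]."""
    return sum(bit * p_prime[h + j] for j, bit in enumerate(s))
```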

Figure 6: Joining 𝒳_R[k′, h′, s] with 𝒳[k, h, s]. The solution y belongs to the space 𝒳[k, h, s], while y′ ∈ 𝒳_R[k′, h′, s]. Note that the knapsacks [h+c+u+1, m], where y satisfies the capacity constraints, are complementary to the knapsacks [1, h′−u−1], where y′ satisfies them.

Lemma 6. Let h = m/2 − L/2 − 1, and h′ = h + L + 1.
Then,

opt[A] = max_{s ∈ 𝒮, |s| ≤ k ≤ K} ( opt_R[K − k + |s|, h′, s] + opt[k, h, s] − p(s, h) ). (24)

Indeed,

𝒳[A] = ⋃_{s ∈ 𝒮, |s| ≤ k ≤ K} ( 𝒳_R[K − k + |s|, h′, s] ⊗ 𝒳[k, h, s] ). (25)

Proof. In order to show that 𝒳[A] ⊆ ⋃_{s ∈ 𝒮, |s| ≤ k ≤ K} (𝒳_R[K − k + |s|, h′, s] ⊗ 𝒳[k, h, s]), take x ∈ 𝒳[A]. Moreover, take s := x(h+1, h′−1), y := 0^h · s · x(h′, m), y′ := x(1, h) · s · 0^{m−h′+1}, and k := |y|. In the following, we prove that y′ ∈ 𝒳_R[K − k + |s|, h′, s] and y ∈ 𝒳[k, h, s]. Obviously, y ⊢ (s, h) and y′ ⊢_R (s, h′). Concerning the number of insertions, it is clear that y satisfies Constraint (12) of Sub[k, h, s], as k = |y|. Moreover, note that |x| ≤ K, because x ∈ 𝒳[A], and |y′| = |x| − |y| + |s|. Thus, |y′| ≤ K − k + |s|, and then y′ satisfies Constraint (12) of Sub_R[K − k + |s|, h′, s]. As for the capacity constraints, in order to show that y satisfies them on the knapsacks [h+c+u+1, m], we can apply Observation 2 with a = h+1. Conversely, applying Observation 1 with a = h′−1, we obtain that y′ satisfies the capacity constraints on the knapsacks [1, h′−u−1].
To prove the converse inclusion, take s ∈ 𝒮, k such that |s| ≤ k ≤ K, y′ ∈ 𝒳_R[K − k + |s|, h′, s], and y ∈ 𝒳[k, h, s]. We prove that x := y′(1, h) · s · y(h′, m) ∈ 𝒳[A]. Clearly, since |y′| ≤ K − k + |s| and |y| ≤ k, we get that |x| = |y′| + |y| − |s| ≤ K; thus x satisfies Constraint (12). Moreover, since L ≥ c, it is simple to verify that x satisfies also Constraint (5). Furthermore, exploiting Observation 2 with a = h+1, and Observation 1 with a = h′−1, we get that x satisfies the capacity constraints on all the knapsacks [1, m].

Let ๐‘ opt and ๐‘˜opt be the values of ๐‘  and ๐‘˜ that maximize (24). Moreover, let ๐‘ฅoptโˆˆ{0,1}๐‘š be a best solution for an IKHO instance ๐ด. Clearly, the signature ๐‘ opt represents a piece of ๐‘ฅopt, that is, ๐‘ opt=๐‘ฅopt(โ„Ž+1,โ„Ž๎…žโˆ’1). In addition, note that ๐‘˜opt determines a distribution of the best solution insertions, that is, |๐‘ฅopt(โ„Ž+1,๐‘š)|โ‰ค๐‘˜opt and |๐‘ฅopt(1,โ„Ž๎…žโˆ’1)|โ‰ค๐พโˆ’๐‘˜opt+|๐‘ opt|. If we do not consider the insertions given by the signature ๐‘ opt, we obtain |๐‘ฅopt(โ„Ž๎…ž,๐‘š)|โ‰ค๐‘˜optโˆ’|๐‘ opt| and |๐‘ฅopt(1,โ„Ž)|โ‰ค๐พโˆ’๐‘˜opt, as described in Figure 7.

Figure 7: The subdivision along the k dimension. A possible best solution x_opt is drawn; the black spots represent the insertions. It is simple to see that the value k_opt − |s_opt| bounds the number of insertions placed to the right of s_opt, while K − k_opt bounds the number of insertions to the left of the signature.

Having fixed the L middle elements of x_opt has an important consequence: the radiations of weight starting from the insertions to the left of s_opt cannot interfere with the radiations coming from the insertions placed to the right of s_opt, as shown in Figure 8. In particular, by (9), the right insertions affect only the range [h′−u, m], while the left insertions affect only the knapsacks [1, h+c+u].

Figure 8: The subdivision along the s dimension. The radiations of weight coming from the right knapsacks do not reach those starting from the left knapsacks.

Therefore, in order to compute the entire part of x_opt which stands to the right of s_opt, it is enough to know s_opt, because the checks on the capacity constraints are independent of the insertions to the left of s_opt. The same clearly holds for finding the left piece of x_opt. Thus, we have subdivided the main problem into two independent subproblems, as follows:
(i) to find the best solution in the knapsacks [h′, m], obeying the capacity constraints over the knapsacks [h′−u, m], having at most k_opt − |s_opt| insertions, and knowing that x_opt(h+1, h′−1) = s_opt;
(ii) to find the best solution in the knapsacks [1, h], obeying the capacity constraints over the knapsacks [1, h+c+u], having at most K − k_opt insertions, and knowing that x_opt(h+1, h′−1) = s_opt.

The subproblems obtained above can be solved as IKHO instances, by recursively applying (24) with some adjustments. The only significant difference between the main call and the recursive calls is that, in the latter, when checking the capacity constraints, we have to consider the insertions given by the previously fixed s_opt. We can simplify this task by progressively updating the vector of capacities c, subtracting w(s_opt, ℓ) for each ℓ ∈ [1, m]. At each step of the recursion, the new vector c is passed as input to the lower-level subproblems. The solution of each subproblem fixes over x_opt a signature s_opt, so as to recursively cover x_opt with substrings of length L. This is visualized in Figure 9. Finally, note that the base cases of this recursion are the subproblems involving at most L knapsacks. In order to compute the best solution for them, we simply list all the feasible solutions and compare their profits.

Figure 9: Covering x_opt recursively. An optimal solution x_opt is drawn. The black elements are determined by the first call of (24). The gray knapsacks are determined by the next two levels of calls.
5.2. Complexity

As mentioned at the beginning of this section, in the first call (the root node of the recursion tree), the computation of (24) takes O(|𝒮| × K) space. The recursive calls occupy geometrically less memory than the first call, as they deal with a halved number of knapsacks (and with a lower value of K), whence the total memory consumption is of the same order as that of the first call alone. Note that, at each step of the recursion, once we have found k_opt and s_opt, we can deallocate the matrices opt and opt_R. Besides these matrices, we need O(m) space to dynamically compose the best solution x_opt. Therefore, the new algorithm takes only O(m + |𝒮| × K) space.

Let us now analyze its time complexity. In the first call, for the computation of (24), the algorithm spends O(m × |𝒮| × K) time to compute the matrices opt and opt_R, and O(|𝒮| × K) time to pick out the values s_opt and k_opt. It also spends O(L²) time to update the vector c by subtracting the radiations of weight given by s_opt: in fact, a signature has width L, and the range of influence of an insertion is c + 2u + 1 = L + 1. To estimate the cost of the subproblems, we need to study how the two terms found, O(L²) and O(m × |𝒮| × K), propagate through the next levels of calls. Let us write them as α·L² and β·(m × |𝒮| × K), for some constants α, β.

First, note that the recursive calling scheme for (24) can be approximated by a binary tree of height ⌈log₂(m/L)⌉. Indeed, since the calls on an input of L knapsacks are treated as leaf cases, the height of the binary tree corresponds to the first integer n such that L·2^n ≥ m, that is, ⌈log₂(m/L)⌉. Moreover, it is simple to see that such a binary tree has O(m/L) nodes. In fact, since each level i has at most 2^i nodes, the total number of nodes is

∑_{i=0}^{log₂(m/L)} 2^i = (1 − 2^{log₂(m/L)+1})/(1 − 2) = (1 − 2·(m/L))/(−1) = 2·(m/L) − 1. (26)

Therefore, considering all the calls of (24), for the first term we obtain α·L²·O(m/L) = O(m × L).
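The node count in (26) is easy to check mechanically; the following toy function mirrors the calling scheme (a call on at most L knapsacks is a leaf, otherwise it spawns two calls on halves):

```python
def recursion_nodes(m, L):
    """Number of calls made by the halving scheme.  Matches the bound
    2*(m/L) - 1 of (26) when m is L times a power of two."""
    if m <= L:
        return 1                      # leaf case: at most L knapsacks
    return 1 + 2 * recursion_nodes(m // 2, L)
```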

Note that, when we call the two children of a node, both the knapsack-array dimension and the K dimension are subdivided. In particular, we call the children on the two halves of the knapsack array (m/2 and m/2), with parameters K′ and K″ such that K′ + K″ ≤ K. Thus, for each level i and each node j, the second term of the complexity is β × (m/2^i) × K_j × |𝒮|, where ∑_j K_j ≤ K. Therefore, for the second term, the complexity of an entire level i is clearly β × (m/2^i) × K × |𝒮|. Moreover, by adding up the levels, we get that the total complexity for the second term is

∑_{i=0}^{log₂(m/L)} β × (m/2^i) × K × |𝒮| = β·m·K·|𝒮| × ∑_{i=0}^{log₂(m/L)} 1/2^i ≤ β·m·K·|𝒮| × 2 = O(m·K·|𝒮|). (27)

Since |𝒮| = S_c(L) ≥ L, the second term O(m × |𝒮| × K) always bounds the first O(m × L). Therefore, the complexity of the node computations is O(m × |𝒮| × K).

Regarding the computation of the leaf cases, note that we have to find the best feasible piece of solution for an input of at most L knapsacks. Since these pieces have length at most L, and they must obey Constraint (5), there are at most |𝒮| of them. Moreover, we have to consider the cost of checking the capacity constraints on O(L) knapsacks. Thus, a leaf case can be solved in O(|𝒮| × L) time. Since we have O(m/L) leaf cases, their total cost is O(m × |𝒮|), which is not higher than the complexity needed for computing the internal nodes. Therefore, the memory-saving version of the algorithm has time complexity O(m × |𝒮| × K), the same as the base version.
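A leaf case reduces to a plain enumeration, which can be sketched as follows; `profit` and `feasible` are hypothetical stand-ins for the profit evaluation and for the capacity checks on the O(L) affected knapsacks:

```python
from itertools import product

def best_leaf(n, c, profit, feasible):
    """Leaf case: enumerate every 0/1 string of length n <= L obeying
    Constraint (5) (any two 1's separated by at least c zeros) and return
    the best profit among those accepted by the capacity check `feasible`,
    or None if no string is feasible."""
    best = None
    for bits in product((0, 1), repeat=n):
        ones = [i for i, b in enumerate(bits) if b]
        if all(q - p > c for p, q in zip(ones, ones[1:])) and feasible(bits):
            v = profit(bits)
            if best is None or v > best:
                best = v
    return best
```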

6. Conclusions

In [3], Aho exhibited a few classes of polynomial instances for the IKHO problem, motivated by the fact that IKHO is NP-complete [1] and APX-hard [2]. The most important class of instances identified by Aho is that of the instances where c + 2u + 1 = O(log(m^α)), for a constant α. Throughout Sections 3 and 4, we identified a new and wide class of instances allowing for a polynomial time algorithm. We achieved this by showing how to build a dynamic programming algorithm which runs in O(c^r × m × K) time and O(c^r × m × K) space for the instances where (c + 2u)/c is bounded by a constant r. These results represent a significant improvement in the understanding of the IKHO problem and, more generally, of the Interactive Knapsacks problems to which IKHO belongs and which are extensively presented by Aho in [4]. Note that our results imply Aho's results for IKHO, as shown in Section 2.2. In Section 5, we also exploited Hirschberg's approach in order to create a memory saving version of our algorithm, which decreases the space complexity to O(c^r × K + m) without increasing the time complexity.

Extensive experimental evaluations have been performed on the C++ implementations of both algorithms, confirming the complexity estimates given in Sections 4 and 5. In addition, these experiments pointed out that, despite the heavy constants introduced in the time complexity bound of the memory saving version, the latter is often faster than the base version in practice. This is mainly because the higher memory usage of the base version leads the operating system to allocate data in slower memory devices, such as RAM and the hard disk, instead of the CPU cache.

Appendix

Proving the Base of Lemma 4

In this appendix, we prove the base of the induction of Lemma 4. We have to show that, assuming c ≥ e, S_c(n) ≤ ((c+1)/c)·c^{n/c} for n ∈ [c, 2c].

In the following, let

F_c(i) = 1 + c + i + i(i−1)/2. (A.1)

First, notice that F_c(i) = S_c(c+i) for i ∈ [0, c]. Indeed, for i ∈ [0, c], we can compute S_c(c+i) by simply counting the number of signatures of length c+i, and then verify that this number is exactly F_c(i). By Constraint (5), in a signature of length 2c, or smaller than 2c, there can be at most two insertions. Thus, since c+i ≤ 2c, we can count the signatures of length c+i by grouping them according to the number of insertions (none, one, or two). Obviously, only the signature 0^{c+i} has no insertions. Moreover, there are exactly c+i ways to place one insertion. For the case of two insertions, note that the first one can only be in the first i−1 positions, because if we place it in the i-th position or later, by Constraint (5), there cannot be a second insertion in a signature of length c+i. Moreover, notice that if we fix the first insertion in position j, then we have i−j possible places for the second one, again by Constraint (5). Therefore, the number of signatures with two insertions is clearly

∑_{j=1}^{i−1} (i−j) = ∑_{j=1}^{i−1} j = i(i−1)/2. (A.2)
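The counting argument can be double-checked by brute force; the sketch below enumerates every binary string of length n = c + i and keeps those obeying Constraint (5):

```python
from itertools import product

def count_signatures(n, c):
    """Brute-force S_c(n): the number of binary strings of length n in
    which any two consecutive 1's are separated by at least c zeros
    (Constraint (5))."""
    count = 0
    for bits in product((0, 1), repeat=n):
        ones = [p for p, b in enumerate(bits) if b]
        if all(q - p > c for p, q in zip(ones, ones[1:])):
            count += 1
    return count
```

For every c and every i ∈ [0, c], the count agrees with F_c(i) = 1 + c + i + i(i−1)/2.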

So ๐น๐‘(๐‘–)=๐‘†๐‘(๐‘+๐‘–) for ๐‘–โˆˆ[0,๐‘], as anticipated. Therefore, in order to prove the base of the induction of Lemma 4, it is enough to show that ๐น๐‘(๐‘–)โ‰ค๐‘“๐‘(๐‘–), with ๐‘“๐‘(๐‘–)=((๐‘+1)/๐‘)๐‘(๐‘+๐‘–)/๐‘=((๐‘+1)/๐‘)๐‘1+๐‘–/๐‘=(๐‘+1)๐‘๐‘–/๐‘.

Clearly, for ๐‘–=0, we get ๐น๐‘(0)=๐‘+1โ‰ค(๐‘+1)๐‘0/๐‘=๐‘“๐‘(0). Our plan is then to observe that ๐œ•๐น๐‘(๐‘–)/๐œ•๐‘–โ‰ค๐œ•๐‘“๐‘(๐‘–)/๐œ•๐‘– holds for every ๐‘–โ‰ฅ0, when ๐‘โ‰ฅ๐‘’. Indeed, ๐œ•๐น๐‘(๐‘–)/๐œ•๐‘–=๐‘–+1/2, whereas ๐œ•๐‘“๐‘(๐‘–)/๐œ•๐‘–=(๐‘+1)๐‘๐‘–/๐‘(ln(๐‘)/๐‘). When ๐‘–=0, we get ๐น๎…ž๐‘(0)=1/2, and since ๐‘โ‰ฅ๐‘’, ๐‘“๎…ž๐‘(0)=(๐‘+1)๐‘0ln(๐‘)๐‘=ln(๐‘)๐‘+1๐‘>1.(A.3) That is, ๐น๎…ž๐‘(0)โ‰ค๐‘“๎…ž๐‘(0). Moreover, it is simple to see that ๐น๎…ž๐‘(๐‘–) is linear, whereas ๐‘“๎…ž๐‘(๐‘–) is an exponential on the variable ๐‘–, and, therefore, ๐‘“๎…ž๐‘(๐‘–)โ‰ฅ๐น๎…ž๐‘(๐‘–) for every ๐‘–โ‰ฅ0, as anticipated.

References

  1. Isto Aho, โ€œInteractive Knapsacks,โ€ Fundamenta Informaticae, vol. 44, no. 1-2, pp. 1โ€“23, 2000. View at Google Scholar
  2. Isto Aho, “On the approximability of interactive knapsacks problems,” in Proceedings of the 28th Annual Conference on Current Trends in Theory and Practice of Informatics (SOFSEM '01), vol. 2234 of Lecture Notes in Computer Science, pp. 152–159, Piešťany, Slovak Republic, November/December 2001.
  3. Isto Aho, โ€œNew polynomial-time instances to various knapsack-type problems,โ€ Fundamenta Informaticae, vol. 53, no. 3-4, pp. 199โ€“228, 2002. View at Google Scholar ยท View at Scopus
  4. Isto Aho, Interactive Knapsacks: Theory and Application, A-2002-13, University of Tampere, 2002.
  5. E. Y.-H. Lin, โ€œA bibliographical survey on some well-known non-standard knapsack problems,โ€ INFOR, vol. 36, no. 4, pp. 274โ€“317, 1998. View at Google Scholar ยท View at Scopus
  6. D. S. Hirschberg, โ€œAlgorithms for the longest common subsequence problem,โ€ Journal of the ACM, vol. 24, no. 4, pp. 664โ€“675, 1977. View at Publisher ยท View at Google Scholar
  7. D. Gusfield, Algorithms on Strings, Trees, and Sequences, Cambridge University Press, Cambridge, UK, 1997.