Abstract

In the game of Betweenies, the player is dealt two cards out of a deck and bets on whether the third card to be dealt will have a numerical value in between the values of the first two cards. In this work, we present the exact rules of the two main versions of the game, and we study the optimal betting strategies. After discussing the shortcomings of the direct approach, we introduce an information-theoretic technique, Kelly's criterion, which basically maximizes the expected log-return of the bet: we offer an overview, discuss feasibility issues, and analyze the strategies it suggests. We also provide some gameplay simulations.

1. Introduction

The card game of “(In-)Betweenies” or “In-Between” [1] is also known under many alternative names, including “Between the Sheets” [2], “Red Dog” [3], “Acey Deucey” [4], and “Yablon.” It comes in many variants and is offered as a gaming option in several casinos and online casinos. Though the exact rules vary, the main concept is that the player is dealt two cards and bets on whether the value of a third card, about to be dealt, will be between the values of the two previously dealt cards. In this work we calculate the probabilities associated with the game, namely, the probability that a given hand is dealt and the probability of winning given the hand dealt, and we suggest a betting strategy based on Kelly's criterion (KC).

KC is famous for suggesting an optimal betting strategy which, at the same time, eliminates the possibility of the gambler getting ruined [5, 6]. As we will see below, however, the set of rules of this particular game does not fall within the scope of the general case considered in [5], in view of the fact that the player needs to contribute an amount of money in the beginning of each round for the right to play in that round. This feature can actually lead to ruin even if this strategy is followed. From this point of view, Betweenies makes a particularly interesting case study for KC.

2. The Rules

The game is played in rounds and with a standard deck of 52 cards. The cards from 2 to 10 are associated with their face values, while Jack, Queen, and King with the values 11, 12, and 13, respectively. Aces can be associated with either 1 or 14, subject to further rules stated below. In the beginning of the round, every player contributes a fixed amount of money (henceforth assumed to be equal to 1), known as the “ante,” to the pot in order to play. Subsequently, each player is dealt two cards, one at a time, face up. Assuming that the first card is an ace, the player must declare it “low” (of value 1) or “high” (of value 14). Assuming that the second card is an ace, it is automatically declared “high.” After the two cards are dealt to a player, the player bets an amount of money, ranging from 0 to the pot amount, that the third card dealt will be strictly between the two already dealt cards (which we designate as event ). If the player wins, the player receives the amount of money bet from the pot; otherwise, the player contributes the amount of money bet to the pot. If a player's wealth becomes zero, the player quits the game. If at any point during the round the pot becomes empty, the round ends and a new round begins.
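
For concreteness, the following minimal Python sketch simulates the deal and the win test of a single party-version round under the rules just described (the function name and interface are ours; the betting decision itself is deferred to Section 5):

```python
import random

def deal_party_round(declare_first_ace_high=False, rng=random):
    """Deal two cards (values 1-13; aces may become 14) and a third card;
    return (low, high, third, win) for one party-version round."""
    deck = [v for v in range(1, 14) for _ in range(4)]   # 13 values x 4 suits
    rng.shuffle(deck)
    first, second, third = deck[0], deck[1], deck[2]
    if first == 1 and declare_first_ace_high:
        first = 14                       # first-card ace: player's choice
    if second == 1:
        second = 14                      # second-card ace: automatically high
    low, high = min(first, second), max(first, second)
    win = low < third < high             # a third-card ace (value 1) never wins
    return low, high, third, win
```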

There are several possible variations to the rules above. (i) Multiple decks of cards may be used. (ii) Aces may always carry their face value 1. (iii) Assuming that the three dealt cards are all equal, a payout of 11 : 1 may be paid to the player (i.e., the player gets the original bet back plus 11 times the bet). (iv) Assuming that the two dealt cards have consecutive values, the bet is returned to the player at no loss. (v) The bet payout may vary depending on how many cards apart the two dealt cards are: 5 : 1 for one card apart, 4 : 1 for two cards apart, 2 : 1 for three cards apart, and 1 : 1 for four or more cards apart. (vi) The unit ante contribution to the pot may not apply.

We will refer to the game with all these options turned off and on as the party version and the casino version, respectively. The reason is that, in a party game between friends, ante contributions are necessary to get the game going, while a casino game is in general more gambling-oriented and bets are covered by the casino's funds.

Further variations are mentioned in the literature: for example, when the third card is equal to either of the first two, then the player not only loses the bet into the pot, but has to contribute an extra amount equal to the bet into the pot [1]. As the possibilities are practically endless, we will restrict our study to the two versions mentioned above.

3. Kelly's Criterion

Kelly's criterion (KC) [5, 7] is a reinterpretation of the concept of mutual information, which is the core subject of Information Theory, in the context of games of chance and betting. Simply put, assuming independent trials in a game of chance, it suggests a betting strategy based on which a player can expect an exponential increase of his wealth. The rate of this increase is, more precisely, equal to the information gain between the two underlying probability distributions of the game: the true outcome probabilities and the projected outcome probabilities, as suggested by the advertised odds. We demonstrate it here by reworking the example in Kelly's original paper [5] in full detail, as the original exposition is rather terse; we omit, however, the extra information provided by the “private wire” considered therein, which is an additional complication not directly relevant to our discussion.

3.1. Description of a General Game and the Classical Approach

Consider a random variable with $n$ possible mutually exclusive outcomes, occurring with probabilities $p_i$, $i = 1, \dots, n$. Assume further that the odds placed on outcome $i$ are $o_i$-for-1, whereby it is meant that, if a player places a bet $b_i$ on outcome $i$, and $i$ is indeed the outcome, he will receive a wealth of $o_i b_i$ back including the original bet (hence the use of the new symbol “for” instead of the previous “:”); otherwise the bet is lost. How should the various $b_i$ be determined?

One possible approach is to maximize the expected wealth: assuming that the player's initial total wealth is $W$, the total wealth after betting and assuming outcome $i$ is $W_i = W - \sum_{j=1}^{n} b_j + o_i b_i$ (clearly $\sum_{j=1}^{n} b_j \le W$); hence the expected wealth after the game is $\sum_{i=1}^{n} p_i W_i$. We observe that $\sum_{i=1}^{n} p_i W_i = W + \sum_{i=1}^{n} (p_i o_i - 1) b_i$. Assuming $p_i o_i < 1$ for some $i$, for any bet with $b_i > 0$, the new bet with all $b_j$, $j \neq i$, left unchanged and $b_i = 0$, is at least as profitable; hence the optimal bet can be taken to have $b_i = 0$. Focusing on an $i$ such that $p_i o_i > 1$, for any bet with $\sum_{j=1}^{n} b_j < W$, the bet with all $b_j$, $j \neq i$, left unchanged and $b_i$ increased so that the bets sum to $W$, is at least as profitable; hence the optimal bet can be taken to have $\sum_{j=1}^{n} b_j = W$. Furthermore, assuming that $p_i o_i > p_j o_j$ and $b_j > 0$, decreasing $b_j$ and increasing $b_i$ by the same amount such that they both remain between $0$ and $W$ is again at least as profitable. We conclude that, assuming that there exists an $i$ such that $p_i o_i > 1$, the optimal bet is to set $b_{i_0} = W$ and $b_i = 0$, $i \neq i_0$, where $i_0$ is taken to be the smallest of all $i$ such that $p_i o_i = \max_j p_j o_j$.

This strategy is, however, highly risky, as, with probability $1 - p_{i_0}$, the bet is lost and the player is ruined. Furthermore, the probability that the player is not ruined after $k$ rounds of the game is $p_{i_0}^k$; assuming that $p_{i_0} < 1$ (otherwise there is really no element of randomness in the game), $p_{i_0}^k \to 0$, so eventually the player gets ruined with certainty. This phenomenon is known as “gambler's ruin.”
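
As a small numeric illustration of this point (the figures below are purely illustrative), take three outcomes with true probabilities 0.5, 0.3, 0.2 and for-1 odds 2.5, 3, 5: the product $p_i o_i$ is maximized, and exceeds 1, for the first outcome, so the expectation-maximizing player stakes everything on it in every round; the expected wealth grows, yet the probability of surviving $k$ rounds is only $0.5^k$:

```python
p = [0.5, 0.3, 0.2]          # true outcome probabilities (illustrative)
o = [2.5, 3.0, 5.0]          # "for-1" odds: a winning bet b returns o*b in total

best = max(range(3), key=lambda i: p[i] * o[i])   # all-in on the largest p*o
factor = p[best] * o[best]                        # per-round expected growth
for k in (1, 10, 50):
    print(f"after {k:2d} rounds: E[wealth]/W = {factor**k:10.2f}, "
          f"P(not ruined) = {p[best]**k:.3g}")
```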

Note that, without loss of generality, we may consider that the entire wealth is wagered, $\sum_{i=1}^{n} b_i = W$. This is because, even if the player wishes to save an amount of money $c$, he may equivalently bet an amount proportional to $1/o_i$ on each outcome $i$. Indeed, (3.3) shows that this betting scheme is possible, since the amounts bet sum to at most $c$, and the return is always at least $c$; in the case of strict inequality, it even leads to a certain extra gain. When $\sum_{i=1}^{n} 1/o_i > 1$, on the other hand, this betting scheme leads to certain loss (unfair odds); hence it may make sense for the player to actually save part of his wealth and bet the rest. This scenario is the most frequent in actual betting schemes, and the money lost to the player accounts for the commission (the “track take”) of the bet authority (the “bookie”).

3.2. A New Approach: Kelly's Criterion

KC suggests maximizing the exponential growth factor of the wealth or, equivalently, the expected log-return of the game, $\sum_{i=1}^{n} p_i \log \frac{o_i b_i}{W}$. For the discussion that follows, we assume (3.3), so we seek to maximize $\sum_{i=1}^{n} p_i \log (o_i b_i)$ subject to $\sum_{i=1}^{n} b_i = W$. To achieve this, we use Lagrange multipliers and maximize $\sum_{i=1}^{n} p_i \log (o_i b_i) - \lambda \sum_{i=1}^{n} b_i$, obtaining $b_i = p_i W$, $i = 1, \dots, n$. In other words, bets are proportional to the probabilities. In that case, the expected log-return equals $\sum_{i=1}^{n} p_i \log (o_i p_i)$. In Information Theory, this quantity is known as the Kullback-Leibler distance (or information gain or relative entropy) [8] between the probability distribution $p$ and the function $q$ (in this order), $D(p\,\|\,q) = \sum_{i=1}^{n} p_i \log \frac{p_i}{q_i}$, where $q_i = 1/o_i$ and $\sum_{i=1}^{n} q_i \le 1$. We distinguish the following two cases. (i) $\sum_{i=1}^{n} 1/o_i = 1$ (fair odds): then $q$ is also a probability distribution and $D(p\,\|\,q) \ge 0$ [8]. (ii) $\sum_{i=1}^{n} 1/o_i < 1$ (superfair odds): then $q$ can be turned into a probability distribution (and this case reduced to the previous one) by considering the additional event “none of $1, \dots, n$ occurs” and assigning to it $q$-probability $1 - \sum_{i=1}^{n} 1/o_i$. As this event has zero $p$-probability to occur, it does not affect the player's betting strategy (note that, by convention, terms in the sum defining $D(p\,\|\,q)$ corresponding to $p_i = 0$ are taken to be 0, which equals the limit value as $p_i \to 0$ [8]).
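
The following sketch (again with illustrative numbers) checks numerically that, under fair odds, the proportional bets $b_i = p_i W$ attain an expected log-growth equal to the Kullback-Leibler distance between the true probabilities and the probabilities implied by the odds:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])     # true probabilities (illustrative)
q = np.array([0.4, 0.35, 0.25])   # bookmaker's implied probabilities (sum to 1)
o = 1.0 / q                       # fair "for-1" odds

b = p                             # Kelly: bet fractions proportional to p
growth = np.sum(p * np.log(o * b))   # expected log-return per round
kl = np.sum(p * np.log(p / q))       # D(p || q)
print(growth, kl)                    # both are approximately 0.0207 nats
```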

3.3. Unfair Odds

What happens when ? In this case, neither is nor can it be extended into a probability distribution; hence is not guaranteed to be positive, and, even if it is, this strategy may be suboptimal. An attempt to use Lagrange multipliers directly as above, even allowing for the possibility that a part of the initial wealth is saved, leads to , hence to no solution, assuming that all , . We therefore need to consider the possibility that zero bets get placed on some of the possible outcomes. To sum up, we need to maximize The fact that the log function is concave over the maximization region guarantees convergence to a global maximum. We observe, though, that some of the constraints are inequalities rather than equalities, and dealing with such constraints requires the use of a generalization of the Lagrangian method of multipliers, known as the Karush-Kuhn-Tucker (KKT) equations [9]: we form the functional which we now attempt to maximize. A stipulation of KKT theory is that the coefficients corresponding to inequality constraints must carry the sign of the inequality, and that, if the inequality is strictly satisfied at the point of optimality, the coefficient must be zero: specifically, , and either or else , ; the case for and is similar, but, as we established above, the optimal bet necessarily has ; hence . Taking the partial derivatives yields We now define , whence it follows that

Setting and we obtain Hence, These conditions are enough to determine unambiguously. To begin with, assume, without loss of generality, that the outcomes are so ordered that is a decreasing function of : then, for some ( stands for ). Now define, and note that . Assuming that , it follows that and that no bets are placed. In any case, and , where is the smallest such that . Note that (as noted in [5]), compared to a classical player who avoids betting on outcomes for which the odds are unfavorable, namely, for which , a player following KC does bet on such outcomes, as long as .
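
For completeness, here is a Python sketch of the resulting threshold procedure (the function name and variable names are ours, and the procedure is the standard Kelly solution under a track take, consistent with the description above): outcomes are sorted by decreasing $p_i o_i$, and an outcome is admitted to the betting set as long as its $p_i o_i$ exceeds the reserved (unbet) fraction computed from the outcomes already admitted.

```python
def kelly_unfair(p, o):
    """Kelly bet fractions when sum(1/o) > 1 (unfair odds).
    p[i]: true probability of outcome i; o[i]: 'for-1' odds.
    Returns (bets, reserve): fractions of wealth bet on each outcome
    and the fraction of wealth kept aside."""
    order = sorted(range(len(p)), key=lambda i: p[i] * o[i], reverse=True)
    reserve, chosen = 1.0, []
    p_sum = inv_o_sum = 0.0
    for i in order:
        if p[i] * o[i] <= reserve:      # no further outcome is worth betting on
            break
        p_sum += p[i]
        inv_o_sum += 1.0 / o[i]
        reserve = (1.0 - p_sum) / (1.0 - inv_o_sum)
        chosen.append(i)
    bets = [0.0] * len(p)
    for i in chosen:
        bets[i] = p[i] - reserve / o[i]
    return bets, reserve

print(kelly_unfair([0.5, 0.3, 0.2], [2.2, 2.8, 4.0]))
# -> bets approximately [0.083, 0, 0], reserve approximately 0.917
```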

As a historical note, let us mention that the KKT theory, formulated in 1951, predates KC, published in 1956. Unfortunately, (3.11) and (3.12) were given in [5] without even a hint as to how they were obtained, so we can only assume that the KKT theory was used.

3.4. Feasibility

Though KC suggests a betting strategy that is both optimal and avoids gambler's ruin, in many practical games the rules prohibit its application, and some approximation is required. To demonstrate the main issues, let us continue with the example of the random game of possible mutually exclusive outcomes we have been studying in this section: the optimal betting strategy suggested by KC regards the version of the game, henceforth labeled , where the player has the right to place simultaneous bets, one on each possible outcome. Alternatively, however, a player may be restricted by the rules to place a (possibly negative) bet on one outcome of his choice only, negative bets signifying bets on the complementary outcome; we label this version . Finally, a player may be restricted by the rules to place a (nonnegative) bet on one outcome only predetermined by the rules; we label this version .

As a concrete example, consider the game of rolling two fair dice and betting on the sum of their outcomes, which ranges from 2 to 12. Our analysis above concerned , where the player is allowed to place 11 simultaneous bets, one on each possible outcome of the sum. Under , the player would be restricted into placing a bet on only one outcome of his choice; for example, that the sum will (or will not) be 8. Finally, under , the player would be restricted into placing a bet on the outcome, for example, that the sum will be 8, assuming that the rules restricted betting to this particular value of the sum and no other.

When , and under simple returns, and are essentially the same, except for the fact that in negative bets are allowed; note, indeed, that a negative bet for an outcome translates into a positive bet for its complement. In practice, hardly any game is or : imagine, for example, a player playing Blackjack and betting on the outcome that the dealer has a higher hand than him!

As another example, in the party version of the game of Betweenies, given the player's hand, the probabilities that the third card dealt will or will not fall strictly between the cards of the hand can be computed, and they clearly add up to 1; therefore, this game is clearly an instance of the general game described in Section 3.1 for $n = 2$. Applying KC, however, presupposes a game in which the player is able to place bets simultaneously on both possible outcomes, namely, that the third card will, or that it will not, lie strictly between the two cards of the hand; the game rules, however, allow betting only on the former event, not on the latter. KC can still be applied in a modified form, allowing only part of the player's wealth to be placed in bets while saving the rest, but the feasible betting strategy so obtained (which is the main object of this work and is studied in detail in the next sections) will be suboptimal.

Note that this case, where betting is restricted by the rules of the game to certain outcomes only, should not be confused with the unfair odds case in Section 3.3, despite the similarity due to the common feature that only part of the wealth is actually waged. In that section, the player was still allowed to bet on all possible outcomes. In particular, the analysis carried out in that section is not relevant for the scenario just described.

3.5. Criticism

The most important feature of KC to keep in mind is that the betting strategy it proposes maximizes the player's wealth in the long run, but it normally achieves this through highly volatile short-term outcomes [6]. Given, however, the finite span of human life and human nature more generally, many might find it preferable to trade the optimal but highly volatile eventual growth of wealth achieved by KC for a suboptimal growth, as long as it is also less volatile in the short or medium term. This is normally achieved by “underbetting,” namely, placing bets equal to a fraction of what KC proposes.

4. Party Version: The Probabilities

The probabilistic analysis of Betweenies naturally breaks down into two stages: first, the probabilities that a player be dealt any specific hand of two cards must be determined; then, the probability of victory given any dealt hand of two cards must be determined.

4.1. Hand Probabilities

Let $H_{ij}$ denote the event that the two cards dealt have values $i$ and $j$, $1 \le i \le j \le 14$. We set $P_{ij} = P(H_{ij})$. Note that, unless $i = 1$ or $j = 14$, the order in which the two cards are dealt is irrelevant for determining $P_{ij}$; furthermore, the order is always irrelevant for determining the conditional winning probability given $H_{ij}$.

Assuming then that $2 \le i < j \le 13$, $P_{ij} = \frac{4 \cdot 4}{\binom{52}{2}} = \frac{8}{663}$, as the first card can be chosen in 4 ways, as can the second, while the totality of possible pair choices is $\binom{52}{2} = 1326$, the order being unimportant.

Assuming now that $2 \le i = j \le 13$, $P_{ii} = \frac{4}{52} \cdot \frac{3}{51} = \frac{1}{221}$, as the first card can be chosen in 4 ways (out of 52 possible cards) and the second in 3 ways (out of 51 possible cards).

When aces are present, things get complicated by the low-high option. Let $q$ be the probability that the player declares the first ace card (if such a card be indeed dealt) to be high. Then, for $2 \le j \le 13$, $P_{1j} = \frac{4}{52}(1-q)\frac{4}{51} = \frac{4(1-q)}{663}$, as the first ace can be chosen in 4 ways (out of 52 possible cards), declared low with probability $1 - q$, and the second non-ace in 4 ways (out of 51 possible cards). Similarly, $P_{1,14}$ is the probability that two aces are dealt and that the first is declared low: $P_{1,14} = \frac{4}{52}(1-q)\frac{3}{51} = \frac{1-q}{221}$. Furthermore, $P_{14,14}$ is the probability that two aces are dealt and that the first is declared high: $P_{14,14} = \frac{4}{52}\,q\,\frac{3}{51} = \frac{q}{221}$, while $P_{11} = 0$. Finally, $H_{j,14}$, $2 \le j \le 13$, is the result of two possible and mutually exclusive scenarios: either the first card dealt is an ace declared high, or the second card is an ace. It follows that $P_{j,14} = \frac{4}{52}\,q\,\frac{4}{51} + \frac{4}{52}\cdot\frac{4}{51} = \frac{4(1+q)}{663}$.
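
As a sanity check, the following sketch tabulates the hand probabilities for a given $q$ by brute-force enumeration of ordered deals and verifies that they add up to 1; the closed-form expressions above can be compared against its output (the function name is ours):

```python
from collections import defaultdict
from fractions import Fraction

def party_hand_probs(q):
    """Return {(low, high): probability} for the party version;
    q is the probability of declaring a first-card ace high."""
    q = Fraction(q)
    probs = defaultdict(Fraction)
    for first in range(1, 14):               # value of the first card dealt
        p1 = Fraction(4, 52)                 # 4 cards of each value
        for second in range(1, 14):          # value of the second card dealt
            p2 = Fraction(3 if second == first else 4, 51)
            v2 = 14 if second == 1 else second        # second-card ace: high
            if first == 1:                            # first-card ace: low or high
                for v1, pa in ((1, 1 - q), (14, q)):
                    probs[min(v1, v2), max(v1, v2)] += p1 * pa * p2
            else:
                probs[min(first, v2), max(first, v2)] += p1 * p2
    return probs

probs = party_hand_probs(0)                   # q = 0: first-card aces are low
assert sum(probs.values()) == 1
print(probs[2, 7], probs[3, 3], probs[1, 5])  # 8/663, 1/221, 4/663
```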

4.2. Probability of Victory Given a Certain Hand

Let $V$ denote the event of victory. We set $w_{ij} = P(V \mid H_{ij})$ to be the probability of victory given a certain hand. We observe outright that $w_{ii} = w_{i,i+1} = 0$, as there is no card strictly in between the dealt cards in these two cases. In all other cases, there are exactly $4(j - i - 1)$ cards in between two cards of values $i$ and $j$, $i < j$; hence $w_{ij} = \frac{4(j - i - 1)}{50} = \frac{2s}{25}$ (4.7), where $i < j$ and $s = j - i - 1$, ranging from 0 to 12 inclusive, is set to be the spread of the hand. Note that, by redefining $s = 0$ when $i = j$, $w_{ij}$ in (4.7) can be extended to cover the case where $i = j$ as well.

4.3. Certain Loss and Spread Probabilities

There is clearly no point in betting when $s = 0$. How often does this occur? Letting $P_L$ denote the probability of $s = 0$, it follows that $P_L = \frac{44 + q}{221}$.

We see that $P_L$ is minimized for $q = 0$. This is to be expected: assuming that a player is dealt an ace as the first card, there is no point, in the absence of further information, in declaring it high, as then the player forfeits the possibility of obtaining the strongest possible hand if a second ace is dealt, without gaining any advantage. We will henceforth assume that $q = 0$, in which case $P_L = \frac{44}{221} \approx 0.199$. Hence, in approximately one turn out of five the player has no chance to win.

Note that we do not imply that the player should invariably use $q = 0$, but rather just in the general scenario studied here. Given further information, using $q > 0$ may actually be advantageous. For example, consider a game of many (say 15) players where the first card has been dealt (to all players), and suppose that a player has been dealt an ace, while all other players have been dealt high-value cards and aces. What should the player do? It is rather improbable to receive a second ace under the circumstances, while the remaining deck is now certainly biased towards low-value cards. The best chance may be then to declare the ace high, expecting a low-value second card, in order to establish a wide spread. Using information gathered from used cards accounting for a considerable portion of the deck is frequently referred to as “end play,” and has been shown to be very profitable in other card games as well, for example, Blackjack.

We may similarly consider $P_\sigma$, namely, the probability that the spread equals $\sigma$, for $0 \le \sigma \le 12$. To begin with, $P_0 = P_L = \frac{44}{221}$ and $P_{12} = P_{1,14} = \frac{1}{221}$, as a spread of 12 requires a hand of two aces. Otherwise, for $1 \le \sigma \le 11$, we may write $P_\sigma = P_{1,\sigma+2} + \sum_{i=2}^{12-\sigma} P_{i,i+\sigma+1} + P_{13-\sigma,14}$, where we observe, using the results of Section 4.1, that the first and last terms have probability $\frac{4}{663}$ each, while the middle terms have probability $\frac{8}{663}$ each. Therefore, $P_\sigma = \frac{8(12 - \sigma)}{663}$, $1 \le \sigma \le 11$. We indeed verify, as expected, that $\sum_{\sigma=0}^{12} P_\sigma = 1$.
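
Continuing the previous sketch, the spread distribution and the certain-loss probability follow directly (here with $q = 0$; the helper party_hand_probs is the one defined above):

```python
from collections import defaultdict
from fractions import Fraction

spread = defaultdict(Fraction)
for (i, j), pr in party_hand_probs(0).items():
    spread[max(j - i - 1, 0)] += pr          # spread = number of values in between

print(spread[0])                             # certain loss: 44/221, about 0.199
print([spread[s] for s in range(1, 13)])     # 8(12 - s)/663 for s <= 11, then 1/221
print(sum(spread[s] for s in range(7, 13)))  # probability a bet is placed: 41/221
```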

5. Party Version: Strategy, Bets, and Ruin

5.1. Optimal Betting Strategy and Zero Bets

We will now determine the optimal betting strategy for a given hand using KC, as described in Section 3.2. Assume that the hand dealt has spread $s$, and that the player places a bet $b$, while the player's total wealth in the beginning of the round is $W$. Note that, in accordance with the discussion in Section 3.4, it makes sense to consider that the amount placed as a bet is, in general, less than the total wealth $W$, because, given the hand dealt, the player is not allowed to place bets on both possible outcomes (that the third card will, or that it will not, fall strictly in between) simultaneously, but only on one, namely, the former.

More specifically, at the end of the round, the player's wealth is $W - 1 + b$ with probability $p$ and $W - 1 - b$ with probability $1 - p$. The expected log-return of the game is then $p \log \frac{W - 1 + b}{W} + (1 - p) \log \frac{W - 1 - b}{W}$, where $p = \frac{2s}{25}$ by (4.7) and $0 \le b \le W - 1$, and we seek to maximize this quantity over $b$. Setting the derivative to 0 yields $b = (W - 1)(2p - 1)$. (5.2)

Note that $b = (W - 1)(2p - 1) < W$, so the suggested bet is always less than the current wealth $W$, hence possible, in principle. In practice, however, there are two further conceivable complications in applying KC as described in Section 3.2, as follows. (i) If the restriction imposed by the available pot amount becomes significant, namely, if the bet suggested by (5.2) exceeds the pot, the player uses the pot amount instead. (ii) If $p < \frac{1}{2}$, KC essentially suggests betting on the complementary outcome ($b < 0$), which would be a valid suggestion assuming a game of the second kind described in the classification of Section 3.4: for example, if $s = 0$, KC suggests that the player should place a bet of $W - 1$ that the third card will not fall between the two cards dealt. Unfortunately, in the context of a game of the third kind, such as the one under consideration, such a bet is not allowed, and the player is forced to use $b = 0$ instead.

In particular, the formula indicates that $b > 0$ if and only if $p > \frac{1}{2}$, namely, if and only if $s \ge 7$; hence a player following KC places zero bets with probability $\sum_{\sigma=0}^{6} P_\sigma = \frac{180}{221} \approx 0.81$. KC, then, makes Betweenies a very boring game indeed, as approximately four times out of five players place zero bets!
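
A direct implementation of this betting rule, including the rounding and the pot cap discussed above, might look as follows (names are ours):

```python
def party_kelly_bet(wealth, spread, pot=float("inf")):
    """Kelly bet for one party-version round.
    wealth: wealth at the start of the round (before the ante is paid);
    spread: number of card values strictly between the two dealt cards."""
    p = 2 * spread / 25                   # probability of victory, as in (4.7)
    if p <= 0.5:                          # spread <= 6: no positive edge, no bet
        return 0
    bet = (wealth - 1) * (2 * p - 1)      # Kelly fraction of the post-ante wealth
    return min(round(bet), pot)           # integer bets, capped by the pot

print([party_kelly_bet(50, s) for s in range(13)])
# -> [0, 0, 0, 0, 0, 0, 0, 6, 14, 22, 29, 37, 45]
```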

5.2. Mean Wealth

Assuming that the player places a bet under KC, so that the spread satisfies $s \ge 7$, and using (4.7) and (5.2), the mean earnings will be $(W - 1)(2p - 1)^2 - 1$. For $s \le 6$, the mean earnings are $-1$, namely, the lost ante. The mean wealth after a round is, therefore, $E[W_{\mathrm{new}}] = (W - 1)\bigl(1 + \sum_{\sigma=7}^{12} P_\sigma \bigl(\frac{4\sigma - 25}{25}\bigr)^2\bigr) \approx 1.031\,(W - 1)$. It is worth playing the game if and only if $E[W_{\mathrm{new}}] > W$, namely, approximately, if and only if $W > 33.1$. Note that we have implicitly assumed that the pot is unbounded (so that arbitrarily large bets are possible) and that any value of $b$ is allowed (as opposed to integer values only).

Alternatively, we may consider the mean wealth after a round, given that a nonzero bet was placed in that round: $E[W_{\mathrm{new}} \mid \text{bet placed}] = (W - 1)\bigl(1 + \bigl(\sum_{\sigma=7}^{12} P_\sigma\bigr)^{-1} \sum_{\sigma=7}^{12} P_\sigma \bigl(\frac{4\sigma - 25}{25}\bigr)^2\bigr) \approx 1.168\,(W - 1)$, which suggests that, given that a bet is placed, a mean return of approximately 16.8% is expected, assuming that $W$ is large enough for the unit ante to be negligible.
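
Both figures can be reproduced from the spread distribution derived in Section 4.3; the short computation below is a check of the expressions above:

```python
from fractions import Fraction

P = {s: Fraction(8 * (12 - s), 663) for s in range(1, 12)}
P[12], P[0] = Fraction(1, 221), Fraction(44, 221)           # spread distribution

edge = {s: Fraction(4 * s - 25, 25) for s in range(7, 13)}  # 2p - 1 for s >= 7
gain = sum(P[s] * edge[s] ** 2 for s in range(7, 13))       # unconditional growth
cond = gain / sum(P[s] for s in range(7, 13))               # growth given a bet
print(float(gain), float(cond))       # approximately 0.0311 and 0.168 (16.8%)
```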

In order to consider how the mean wealth varies over different rounds, we let $W_k$ be the player's wealth at the end of the $k$th round ($W_0 = W$ with probability 1); (5.5) shows that $E[W_{k+1} \mid W_k] = a\,(W_k - 1)$, where $a \approx 1.031$ is the growth factor found above and $E[W_{k+1} \mid W_k]$, the mean value of $W_{k+1}$ given $W_k$, is a new random variable depending on $W_k$. Taking anew the mean value of this formula over the distribution of $W_k$, and setting $M_k = E[W_k]$, we obtain the recursion $M_{k+1} = a\,(M_k - 1)$. This recursion is readily solvable: $M_k = a^k\bigl(W - \frac{a}{a-1}\bigr) + \frac{a}{a-1}$.

5.3. Probability of Ultimate Ruin

We mentioned above that if the player's original wealth is then it is worth playing the game, as the average wealth after one round is greater than . We caution the reader that this means that if the player plays “many times” one round only, each time starting with , then “on average” . In particular, this statement should not be misunderstood to mean that if the player plays many rounds in a row, each time using as the new , then the final expected wealth will be (unboundedly) greater than the starting wealth, provided the latter was at least . A numerical simulation of the game shows, in fact, that starting with and playing successive rounds till either 1000 rounds are reached or is reached, the player goes broke with a probability of approximately ! On the other hand, for a starting , the simulation shows that the player goes broke with a probability of only 4.7%, but otherwise accumulates an average wealth of at the end of 1000 rounds.
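
A Monte Carlo sketch of this kind of experiment is given below, reusing the helper sketches of Sections 2 and 5.1 (solo player, infinite pot, integer bets); the round limit, the starting wealth, and the optional stopping target are parameters of the sketch rather than the exact settings behind the figures quoted above:

```python
def play_party_game(start, rounds=1000, target=None):
    """Play successive party-version rounds with Kelly bets; return the final
    wealth (0 denotes ruin).  Solo player, infinite pot, integer bets."""
    wealth = start
    for _ in range(rounds):
        if wealth < 2:                          # cannot pay the ante and survive
            return 0
        if target is not None and wealth >= target:
            break                               # optional early withdrawal
        low, high, third, win = deal_party_round()
        bet = party_kelly_bet(wealth, max(high - low - 1, 0))
        wealth += -1 + (bet if win else -bet)   # pay the ante, settle the bet
    return wealth

results = [play_party_game(50) for _ in range(2000)]
print(sum(r == 0 for r in results) / len(results))   # estimated ruin frequency
```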

Suppose that the player's intention is to play successive rounds of the game in order to eventually accumulate wealth (unboundedly) greater than the original wealth: the relevant quantity to consider is the probability of ruin $r_W$, namely, the probability that the total accumulated wealth becomes 0 or less after playing the game for any finite number of rounds, provided the original wealth was $W$. In order to study $r_W$, we observe that a Markov chain can, in fact, model the game. Its states are all possible (nonnegative) values of wealth $W$, and state $0$ actually stands for “zero wealth or debt” and is absorbing: assuming a player reaches this state, reaching positive wealth ever again is impossible. Furthermore, all states $W \le 1$ lead with certainty to $0$ because of the contribution of a unit of wealth in the beginning of each round.

In order to both simplify analysis and make it more realistic at the same time, we will henceforth assume that wealth and bets can only be multiples of 1 (namely, integers), and that bets suggested by (5.2) get rounded to the nearest integer. In that case, the Markov chain modeling the game has only integer states $W = 0, 1, 2, \dots$. The ruin probabilities are now determined as solutions to the system $r_W = \sum_{W'} t_{W \to W'}\, r_{W'}$, with $r_0 = 1$, (5.11) where $t_{W \to W'}$ denotes the transition probability from wealth $W$ to wealth $W'$ in one round.

To determine $r_W$ for small $W$, note first of all that $r_0 = r_1 = 1$: it is easy to see that $W = 0$ already signifies ruin, and that $W = 1$ leads immediately to ruin because of the unit contribution to the pot. The case of $W = 2$ is a bit more involved: after the unit contribution to the pot in the beginning of the round, the player is left with one unit of wealth, so the only possible bets are $b = 0$ (which surely leads to ruin in the next round), or $b = 1$, which, in case of victory, will allow the player to regain $W = 2$ at the end of the round. What is this probability of victory? First, the player needs sufficient spread so that (5.2) rounds to $b = 1$, namely, $2p - 1 \ge \frac{1}{2}$, or $s \ge 10$. Therefore, we see that the probability of retaining $W = 2$ after one round is $\sum_{\sigma=10}^{12} P_\sigma \frac{2\sigma}{25} = \frac{568}{16575} \approx 0.034$, which is (much) less than 1. Hence, the probability that the player will still have $W = 2$ after $k$ rounds is $\bigl(\frac{568}{16575}\bigr)^k$, and this tends to 0 as $k \to \infty$, so avoiding ruin with initial wealth $W = 2$ is an event of zero probability. To conclude, $r_0 = 1$, $r_1 = 1$, and $r_2 = 1$.

For all other values of $W$ (namely, $W \ge 3$), the allowed values of $W'$ are $W - 1$ (a zero bet) and $W - 1 \pm b_\sigma$, $\sigma = 7, \dots, 12$, where $b_\sigma$ is the bet suggested by (5.2) for spread $\sigma$ rounded to the nearest integer, occurring with respective probabilities $\sum_{\sigma=0}^{6} P_\sigma$, $P_\sigma \frac{2\sigma}{25}$, and $P_\sigma \bigl(1 - \frac{2\sigma}{25}\bigr)$. For each $W$, then, $t_{W \to W'} \neq 0$ for at most 13 values of $W'$; hence the (infinite) array $(t_{W \to W'})$ is extremely sparse.

In order to compute an approximation of numerically, system (5.11) can be approximated by a finite system of equations. To do this, we choose , we determine such that, (this clearly must be done through an alternative method that estimates , e.g., simulation), and we set , thus confining our attention to the square array , . For example, simulations suggest that for . Using , we produce Figure 1, where we plot for in linear scale and for in log scale.
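
A sketch of this numerical scheme is given below; the transition probabilities are rebuilt from the spread distribution and the rounded Kelly bet, and the truncation simply sets the ruin probability to zero beyond the chosen cutoff (the cutoff value and the boundary treatment are choices of this sketch, not necessarily those behind Figure 1):

```python
import numpy as np

def ruin_probabilities(w_max):
    """Solve a truncated version of the linear system for the ruin
    probabilities, assuming they vanish for wealth beyond w_max."""
    P = {s: 8 * (12 - s) / 663 for s in range(1, 12)}     # spread distribution
    P[12], P[0] = 1 / 221, 44 / 221
    win = {s: 2 * s / 25 for s in range(13)}              # victory probabilities

    A = np.eye(w_max + 1)
    rhs = np.zeros(w_max + 1)
    rhs[0] = rhs[1] = rhs[2] = 1.0                        # r_0 = r_1 = r_2 = 1
    for W in range(3, w_max + 1):
        for s, ps in P.items():
            bet = round((W - 1) * max(2 * win[s] - 1, 0))  # rounded Kelly bet
            for nxt, pr in ((W - 1 + bet, ps * win[s]),
                            (W - 1 - bet, ps * (1 - win[s]))):
                if nxt <= w_max:                           # truncation: r = 0 beyond
                    A[W, nxt] -= pr
    return np.linalg.solve(A, rhs)

r = ruin_probabilities(2000)
print(r[3], r[50], r[500])            # ruin probabilities for W = 3, 50, 500
```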

Figure 1 further suggests that, asymptotically, for some constant . Indeed, writing out (5.11) in full, Assume now that , so that the effect of rounding can be ignored: then, Substituting and (5.17) in (5.16), we obtain which verifies our asymptotical choice. Note, however, that this provides no information about . It further follows by the data that behaves rather like a slowly varying function of : for some and some function such that as .

Our results indicate that is the smallest value of such that : at that value, nonruin becomes more likely than ruin. Furthermore, , 4629, and 39484 are the first values of such that , 0.01, and 0.001, respectively. On the other hand, assuming that , which is the smallest value for which ruin is not certain, the probability to avoid ruin is only about .

6. Casino Version: The Probabilities

From now on, we consider all options mentioned in the list of Section 2 turned on.

Assuming $n$ decks of cards, the probability for a hand $H_{ij}$, $1 \le i \le j \le 13$, (remember that “high” aces of value 14 are no longer an option here) is $P_{ij} = \frac{2\,(4n)^2}{52n\,(52n - 1)}$ for $i < j$ and $P_{ii} = \frac{4n\,(4n - 1)}{52n\,(52n - 1)}$. This is actually considerably simpler than the party version.

We define the spread of the hand $H_{ij}$, $i \le j$, as $s = j - i - 1$ (we shall see that it now makes sense to distinguish between equal cards and consecutive cards; hence $s = -1$ is allowed). It follows that the probability of winning given a spread $s$ is $w_s = \frac{4ns}{52n - 2}$ for $s \ge 0$, while $w_{-1} = \frac{4n - 2}{52n - 2}$. Indeed, assuming $s \ge 0$, there are $4ns$ favorable cards out of the total remaining $52n - 2$ cards to draw from, while, if $s = -1$, the two cards are the same and there are $4n - 2$ favorable cards (of the same value) to draw from.

The probability $P_s$ that a hand has spread $s$ is computed as follows: $P_{-1} = \frac{13 \cdot 4n(4n - 1)}{52n(52n - 1)} = \frac{4n - 1}{52n - 1}$, while, for $0 \le s \le 11$, $P_s = \frac{(12 - s) \cdot 2(4n)^2}{52n(52n - 1)} = \frac{8n(12 - s)}{13(52n - 1)}$, as there are $12 - s$ pairs of values that are $s + 1$ apart.
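
These casino-version quantities are short enough to tabulate directly; the following sketch computes the spread distribution and the winning probabilities for $n$ decks and checks that the former sums to 1 (function names are ours):

```python
from fractions import Fraction

def casino_spread_probs(n):
    """Spread distribution for the casino version with n decks (aces always low).
    Spread -1 means two equal cards; 0 means consecutive values."""
    unit = Fraction(1, 52 * n * (52 * n - 1))
    P = {-1: 13 * 4 * n * (4 * n - 1) * unit}             # pairs of equal values
    for s in range(0, 12):
        P[s] = (12 - s) * 2 * (4 * n) ** 2 * unit          # 12 - s value pairs
    return P

def casino_win_prob(n, s):
    if s == -1:
        return Fraction(4 * n - 2, 52 * n - 2)   # a third card of the same value
    return Fraction(4 * n * s, 52 * n - 2)       # 4ns cards strictly in between

for n in (1, 2, 8):
    P = casino_spread_probs(n)
    assert sum(P.values()) == 1
    print(n, float(sum(P[s] for s in range(7, 12))))   # probability a bet is placed
```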

Unlike the party version, there is no possibility of certain loss here: when $s = 0$ there is no opportunity for gain, but the player still incurs no losses.

7. Casino Version: Strategy, Bets, and Ruin

7.1. Optimal Betting Strategy

Once more, following the discussion of Section 5.1, we determine the optimal betting strategy for a given hand using KC. Assume that the hand dealt has spread $s$, and that the player places a bet $b$, while the player's total wealth in the beginning of the round is $W$. At the end of the round, the player's wealth is as follows: (i) $W + b$ with probability $w_s$ and $W - b$ with probability $1 - w_s$, assuming that $s \ge 4$, (ii) $W + 2b$ with probability $w_s$ and $W - b$ with probability $1 - w_s$, assuming that $s = 3$, (iii) $W + 4b$ with probability $w_s$ and $W - b$ with probability $1 - w_s$, assuming that $s = 2$, (iv) $W + 5b$ with probability $w_s$ and $W - b$ with probability $1 - w_s$, assuming that $s = 1$, (v) $W$ with probability 1, assuming that $s = 0$; (vi) $W + 11b$ with probability $w_{-1}$ and $W - b$ with probability $1 - w_{-1}$, assuming $s = -1$.

In general, then, assuming that $s \neq 0$, the player's wealth at the end of the round is of the form $W + a_s b$ for some $a_s \ge 1$ in case of victory and $W - b$ in case of loss; for $s = 0$ the strategy is irrelevant and we may set $b = 0$. The expected log-return of the game becomes $w_s \log \frac{W + a_s b}{W} + (1 - w_s) \log \frac{W - b}{W}$, where $0 \le b \le W$, and we seek to maximize this quantity over $b$. Setting the derivative to 0 yields $b = W\,\frac{w_s (a_s + 1) - 1}{a_s}$. Writing out the various cases explicitly, $b = W(2 w_s - 1)$ for $s \ge 4$, $b = W\frac{3 w_s - 1}{2}$ for $s = 3$, $b = W\frac{5 w_s - 1}{4}$ for $s = 2$, $b = W\frac{6 w_s - 1}{5}$ for $s = 1$, and $b = W\frac{12 w_{-1} - 1}{11}$ for $s = -1$. We see that most values of $s$ lead to $b \le 0$, in which case, as in the party version, the player does not bet. Actual bets take place if and only if $s \ge 7$, independently of $n$, just like the party version. To sum up, a player following KC places zero bets with probability $\sum_{s=-1}^{6} P_s = 1 - \frac{120n}{13(52n - 1)}$, which equals $\frac{181}{221} \approx 0.82$ for $n = 1$ (essentially the value determined before for the party version), but asymptotically approaches $\frac{139}{169} \approx 0.82$ as $n$ increases.

To sum up, we observe that, despite the many superficial differences between the casino and the party versions, the betting strategy for both, under KC, is exactly the same. In particular, the increased odds considered for the rarer cases are not increased enough to have an impact on the betting strategy. More specifically, considering that $w_{-1} = \frac{4n - 2}{52n - 2}$, KC would return a positive bet on two equal cards if and only if the payout exceeded $\frac{1}{w_{-1}} - 1$, namely, for a payout of at least 25 : 1 for $n = 1$, and eventually for a payout of at least 13 : 1 as $n$ increases. Considering that $w_s = \frac{4ns}{52n - 2}$, a positive bet would be returned for a spread of $s \le 6$ if and only if the payout exceeded $\frac{1}{w_s} - 1$ for all $n$, namely, for payouts of at least 12, 6, 4, 3, 2, and 2 (to 1) for $s = 1, \dots, 6$, respectively.
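
The single-outcome Kelly fraction for a payout of $a : 1$ and victory probability $w$ is $\max\{0, (w(a+1) - 1)/a\}$; applying it to the casino payout schedule confirms that, for any number of decks, a positive bet is suggested only for spreads of 7 or more (the helper casino_win_prob is the one from the previous sketch):

```python
from fractions import Fraction

PAYOUT = {-1: 11, 1: 5, 2: 4, 3: 2}            # special payouts; otherwise 1 : 1

def casino_kelly_fraction(n, s):
    """Fraction of wealth to bet on a hand of spread s with n decks
    (0 for s == 0, where the bet is returned anyway)."""
    if s == 0:
        return Fraction(0)
    a = PAYOUT.get(s, 1)                        # payout a : 1
    w = casino_win_prob(n, s)
    return max(Fraction(0), (w * (a + 1) - 1) / a)

for n in (1, 2, 8):
    print(n, [s for s in range(-1, 12) if casino_kelly_fraction(n, s) > 0])
    # -> [7, 8, 9, 10, 11] for every n: only wide spreads trigger a bet
```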

7.2. Mean Wealth

Since the betting strategy is identical in the party and casino versions, the analysis of Section 5.2 becomes relevant here as well. The mean wealth after one round given a hand of spread $s \ge 7$ and initial wealth $W$ is $W\bigl(1 + (2 w_s - 1)^2\bigr)$, while, for $s \le 6$, it is $W$. The mean wealth after a round is, therefore, $E[W_{\mathrm{new}}] = W\bigl(1 + \sum_{s=7}^{11} P_s (2 w_s - 1)^2\bigr)$. The factor multiplying $W$ is approximately equal to $1.027$ for $n = 1$, and it eventually decreases towards approximately $1.021$ as $n$ increases.

As $E[W_{\mathrm{new}}]$ is expressed as a fixed fraction of $W$ not involving the unit ante (contrary to the party version, where the corresponding expression does involve it), there is no possibility of ruin (assuming that infinitely small bets are possible).

8. Examples of Gameplay and Short-Term Considerations

Figure 2 shows two actual games of the party version, as described in Section 4, one of which resulted in ruin and one in a very large wealth over 1000 rounds (which could have been even larger had it not been for a large lost bet in the final rounds). We assume that the player faced infinitely wealthy opponents and that the pot was infinite, which is the same as saying that the player played solo with an infinite pot. The player's initial wealth was taken to be 50. The simulation on the left is typical of games resulting in ruin and gives some insight into the mechanisms that cause ruin. More specifically, we see that the player started by doing well, and three successful bets in rounds 61, 65, and 71 boosted his wealth to 386, an almost eight-fold increase. In round 77, however, an unsuccessful large bet reduced his wealth to 152, followed by two more unsuccessful sizeable bets in rounds 78 and 79, finally reducing his wealth to a meager 74. In short, ruin was partly caused by large unsuccessful bets, namely, localized large losses.

Furthermore, long periods where the wealth slope is equal to $-1$ are clearly visible in the figure, and they correspond to the periods where the player places no bets due to unfavorable hands, but pays the ante in the beginning of each round. This gradual loss of small amounts of wealth over an extended period of time becomes significant when the number of rounds is much larger than the initial wealth, and is the second cause of ruin.

These considerations bring us back to the criticism of KC in Section 3.5: is it possible to reduce the uncertainty of wealth growth by conceding a decrease of the expected final wealth? Let us consider the following two sets of simulations, each consisting of 10,000 games of (at most) 100 rounds each, and where the initial wealth is always . In the first set bets are placed according to KC: the probability of ruin is 52.13%, the mean wealth is 525.9, and the probability of no gain (end wealth less than or equal to the initial wealth) is 60.55%, but the median wealth assuming no loss is 392 and the 5% and 95% quantiles lie at 67 and 4,699, respectively. In the second set the bets placed are double the bets suggested by KC, but the player has moderate expectations and withdraws as soon as his wealth exceeds 80: the probability of ruin is 35.62%, the mean wealth is 64.34, the probability of no gain is 35.72%, but the median wealth assuming no loss is 94 and the 5% and 95% quantiles lie at 82 and 138, respectively.

Violating KC and hyperbetting at low wealth allows the player to escape more quickly from the low-wealth zone where ruin is likely to occur due to gradual loss of wealth. Alas, it makes ruin due to sudden loss of wealth much more likely when wealth is high and bets are high... except that now the player pulls out of the game before such high values of wealth are reached! This strategy then outperforms KC in the short term, leading to a smaller probability of ruin and loss: gain is now small but almost certain, and its effective value is constrained within a much narrower range than before, so that the variation in the expected wealth is smaller. Of course, if the player gets greedy and overbets without withdrawing when wealth exceeds 80, the probability of ruin increases to 77.83%.

9. Summary and Conclusion

We described two variations (called the party and the casino version) of the card game of Betweenies (also known by several other names), in which the player bets on whether the value of the card he is about to receive will lie between the two cards he has already been dealt. After a brief introduction to Kelly's criterion (KC), a method based on Information Theory for determining the optimal betting scheme in a game of chance, in the sense that the expected logarithm of the ratio of the player's wealth after the game to the wealth before it is maximized, we applied it to both described versions of the game, essentially coming up with the same betting strategy in both cases. In the party version, where every player is required to contribute a fixed amount of money to the pot in the beginning of each round, there is an initial-wealth-dependent probability of ruin, which we studied in two ways: by simulation and by solving the relevant equations numerically. The latter method suggested that the probability of ruin is asymptotically inversely proportional to the initial wealth, and we verified this by direct substitution in the equations.

We finally provided some simulations of gameplay of Betweenies (one leading to ruin and another to a large wealth), assuming that players follow the strategy laid down by KC, and we demonstrated how alternative strategies can perform better in the short term, by reducing the probability of ruin and increasing the probability of gain, albeit reducing the expected gain. We also showed that, in the long term, such strategies increase the probability of ruin.