Research Article  Open Access
Liang Wang, Yu Wang, Yan Li, "Mining Experiential Patterns from GameLogs of Board Game", International Journal of Computer Games Technology, vol. 2015, Article ID 576201, 20 pages, 2015. https://doi.org/10.1155/2015/576201
Mining Experiential Patterns from GameLogs of Board Game
Abstract
In board games, gamelogs record past game processes, which can be regarded as an accumulation of experience. Like a real person, a computer player can gradually improve its skill by learning from gamelogs, which makes the game more interesting. This paper proposes an extensible approach to mining experiential patterns from growing gamelogs. The computer player improves its strategies by utilizing these growing patterns, just as it would acquire experience. To evaluate the effect and performance of the approach, we designed a sample board game as a test platform and elaborated an experiment consisting of a series of tests. Experimental results show that our approach is effective and efficient.
1. Introduction
Gaming is one of the principal intellectual activities in daily life and, to some extent, reflects the wisdom of humankind quite directly. Artificial intelligence for games (Game AI) is an important subfield of AI research. The current generation of computer games offers an interesting testbed for Game AI [1]. Meanwhile, applying AI techniques to game design makes games more interesting. Research on Game AI plays an important role in promoting the development of the computer game industry.
The board game is one of the earliest objects of AI research [2, 3]. Compared with other types of games, board games reflect human intelligence in a purer form and have thus attracted the interest of AI researchers [4, 5]. At first, researchers concentrated on algorithms for move searching and game-record matching and made important progress. John von Neumann presented the Minimax algorithm in the 1920s [6]. Shannon [2] proposed a game tree searching algorithm with his chess program in 1950. Knuth and Moore analyzed the alpha-beta pruning algorithm in depth in 1975 [7]. Subsequently, studies on AI in board games were widely launched [8–13]. Many board game contests between machines and human players have shown the advances in AI research [14–16]. Recently, researchers have found that gamelogs (game records) contain a lot of important and interesting knowledge. Chen et al. [17] proposed an approach to abstract expert knowledge from annotated Chinese chess game records. Go game records have been used for life-and-death prediction, pattern acquisition, and pattern matching [18–20]. Esaki and Hashiyama [21] carried out experiments to extract human players' strategies from the game records of Shogi. Weber and Mateas [22] used a data mining approach to process gamelogs for strategy prediction. Takeuchi et al. [23] presented a way of evaluating game tree search methods by game records. Moreover, Wender [24] expounded how to use data mining and machine learning techniques to analyze gamelogs in his project.
A common design for board games is to artificially set difficulty levels on an otherwise invincible computer player, but this is merely a simulation. Human players often expect a computer player to act and think like a real person rather than a superman. Gamelogs in a board game can be considered cumulative experience, which becomes richer and more effective as more games are played. With the accumulation of this experience, computer players can gradually enhance their intelligence, just like human beings. Data mining techniques can be used to extract experience from gamelogs; nevertheless, there are few works on this topic.
In this paper, an approach is presented to mine experiential patterns from the growing gamelogs of a sample board game. The computer player can learn from these patterns to improve its intelligence naturally. First, a Chinese checkers game was designed as a test platform for collecting gamelogs and launching empirical studies. Then, a series of algorithms based on a sequence-tree was proposed to mine experiential patterns from gamelogs. These patterns include Experience Rules, Key States, and Checker Usage. Finally, the mined patterns were applied back to the test platform, and a series of experimental tests was elaborated to verify the effectiveness and performance of our approach.
Experimental results demonstrate that our approach is effective and efficient and can easily be transplanted to other types of games.
2. Methods
The application of experiential pattern mining requires two basic conditions listed below.
(i) The computer players of a game have the minimum ability to play the game correctly without any experiential pattern. This ensures that the game can run properly in the initial stage.
(ii) There are sufficient and valid gamelogs to make the pattern mining task runnable. Only when the number of gamelogs reaches a certain size can the regularity hidden in them appear. In addition, our purpose is to make computer players learn from human players' strategies, so these gamelogs should come from human players' games.
To satisfy the above conditions, we designed a simplified Chinese checkers game as a test platform to provide the basic move algorithms and collect the gamelogs. Subsequent sections cover the following topics in detail:
(a) the concepts concerned with the game and the mining algorithms;
(b) the algorithms for mining experiential patterns;
(c) the methods for applying experiential patterns to the game.
2.1. Key Concepts
The traditional Chinese checkers game has flexible gameplay (see [25] for more details). In this work, the game was simplified to be more convenient for experimenting (the design details are provided in Appendix). The key concepts used in the following sections are briefly explained below.
(i) Checker. A checker is a position on the chessboard where a piece can be placed. Each checker has its own coordinate.
(ii) CheckersState. The checkersstate describes all the checkers on the chessboard and can be traversed to get or update the information of every checker.
(iii) Piece. Each player holds ten pieces. The coordinate of a piece is dependent on the current position of the piece.
(iv) PiecesState. Piecesstates are used to store the coordinates of pieces. A piecesstate is a set that consists of several subsets, one per player in the game. Each subset contains ten elements, which correspond to the ten pieces of a player. In 2-player mode, there are two subsets, called the active side (SAS) and the passive side (SPS), respectively.
(v) GameLog. During gaming, once a piece moves a step, the test platform will save the corresponding piecesstate to a gamelog. The piecesstates in a gamelog are sorted by their creation time. Hence, a gamelog is a sequence of piecesstates. Separate XML files are used to store the gamelogs.
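As an illustration, a gamelog file of this kind might be parsed as follows; the tag and attribute names (`gamelog`, `state`, `result`) and the coordinate format are hypothetical, since the text does not specify the XML schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical gamelog layout: one <state> element per move, in creation order.
SAMPLE = """<gamelog result="win">
  <state id="1">(1,3);(2,3)|(7,6);(8,6)</state>
  <state id="2">(2,4);(2,3)|(7,6);(8,6)</state>
</gamelog>"""

def load_gamelog(xml_text):
    """Parse a gamelog XML string into its result and an ordered
    list of piecesstate strings (sorted by creation order)."""
    root = ET.fromstring(xml_text)
    return root.get("result"), [s.text for s in root.findall("state")]

result, states = load_gamelog(SAMPLE)  # result == "win", len(states) == 2
```

Because the piecesstates keep their creation order, a gamelog reads directly as the sequence of piecesstates described above.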
(vi) Experiential Pattern. Experiential patterns are mined from gamelogs and easily used by computer players. In this work, we considered three kinds of experiential patterns, including Experience Rule, Key State, and Checker Usage.
An Experience Rule is similar to an if-then rule: if piecesstate Sl appears, then the player should achieve piecesstate Sn by moving a piece. The rule can be represented as Sl => Sn, where Sl is called the last state and Sn the next state.
Key States are a group of particular piecesstates. They appear frequently in winning games and thus indicate good game situations. By appropriate algorithm design, the computer player can be guided toward these Key States. Furthermore, approaching Key States may also promote the application of some Experience Rules, because some Key States may themselves be last states or next states.
Checker Usage describes the usage of a checker and is denoted by usage. The usage of a checker c can be calculated by

usage(c) = fallingCount(c) / totalLogs, (1)

where usage(c) represents the usage of checker c, fallingCount(c) holds the number of times that c has been used as the falling checker, and totalLogs stores the number of gamelogs in the database.
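A minimal sketch of this calculation, assuming the natural reading that a checker's usage is its falling count divided by the number of gamelogs; the coordinate strings are illustrative:

```python
def checker_usage(falling_counts, total_logs):
    """Compute usage(c) = fallingCount(c) / totalLogs for every checker c.
    falling_counts maps a checker coordinate to the number of times it
    has been used as the falling checker."""
    return {c: n / total_logs for c, n in falling_counts.items()}

usage = checker_usage({"(4,4)": 30, "(5,6)": 12}, 120)
# usage["(4,4)"] == 0.25, usage["(5,6)"] == 0.1
```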
2.2. Mining Experiential Patterns
By using some traditional move algorithms, the computer player may figure out the best move within several rounds. However, subject to the computing capability of the machine and the runtime requirements of the game, it is hard for the computer player to make more skillful moves or plan longer layouts. Humans improve their chess skill by accumulating playing experience. A computer player can simulate this process, learning from humans' excellent games and thus making its moves more strategic. The three experiential patterns introduced in Section 2.1 are mined from the gamelogs generated in games of human-human mode. The data mining process runs on the server side, and its workflow is shown in Figure 1.
As shown in Figure 1, the data mining process consists of three phases, including data extraction, data mining, and data representation.
2.2.1. Data Extraction
Each game produces a single gamelog file, which results in many XML files stored on the server side. These XML files are not suitable as the direct data source for data mining. Therefore, sequential data with a consistent format needs to be extracted from the gamelogs and stored in a database first.
Three data entities, PiecesState, PiecesStateSequence, and CheckerUsage, are used to format the gamelogs. Their attributes are, respectively, shown in Tables 1, 2, and 3. The piecesstates in PiecesState are distinct, and each of them has a unique ID. The sequences in PiecesStateSequence are repeatable, and each of them corresponds to a gamelog. The tuples in CheckerUsage have the same number as the checkers on the chessboard.



Algorithm 1 describes the process of data extraction.

In Algorithm 1, lines (29)–(37) add a win/loss mark to each nondraw game and negate the IDs of the piecesstates created by the losing player. Lines (38)–(41) combine all the PiecesState IDs from the same game into a single sequential string.
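The marking-and-joining step might be sketched as follows; the function name and the (piecesstate, side) input format are illustrative, not the paper's:

```python
def extract_sequence(states, winner_side):
    """Assign each distinct piecesstate an integer ID, negate the IDs of
    states created by the losing player, and join them into one string.
    `states` is a list of (piecesstate, side) pairs in creation order."""
    ids = {}
    seq = []
    for ps, side in states:
        sid = ids.setdefault(ps, len(ids) + 1)  # reuse ID of a known state
        seq.append(sid if side == winner_side else -sid)
    return ",".join(str(i) for i in seq)

# The winner's states keep positive IDs; the loser's are negated:
# extract_sequence([("a", "W"), ("b", "L"), ("c", "W")], "W") -> "1,-2,3"
```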
2.2.2. Data Mining
Experiential pattern mining means finding useful patterns in a large number of sequences composed of piecesstates. The types of these patterns include but are not limited to those mentioned in Section 2.1. Mining Experience Rules, which are the most important experiential patterns in this work, is a sequential pattern mining task. Sequential pattern mining is a topic of data mining concerned with finding statistically relevant patterns in data where the values are delivered in a sequence [26]. In this field, many efforts have been devoted to developing efficient algorithms, such as GSP [27], SPADE [28], CloSpan [29], PrefixSpan [30], and MEMISP [31]. In recent years, researchers have paid more attention to applied research on sequential pattern mining, as discussed elsewhere [32–36].
In this section, we describe the methods for mining the three kinds of experiential patterns explained in Section 2.1.
Experience Rule. Experience Rules describe the experience of human players most directly. The problem of mining Experience Rules can be described as follows. Let D be a set of items, where each item is a sequence of piecesstates. Given a minimum support threshold minSupport, the aim is to find all the consecutive binary subsequences (CBSSs) whose frequency is not less than minSupport. Each CBSS consists of two piecesstates that are consecutive in the original sequence, and this can be regarded as a constraint in the data mining. In this paper, an algorithm based on pattern-growth is proposed for mining CBSSs. This algorithm uses a sequence-tree (see Figure 2) as the data structure to load all the sequences in the database. The sequence-tree differs from the FP-tree [37] of frequent pattern mining in that the database does not have to be scanned in advance to obtain the support of every piecesstate. Algorithm 2 describes the procedure of creating a sequence-tree.

In Figure 2, value stores the ID of the piecesstate, weight represents the support of the related CBSS, and next stores the linkage pointing to the next node.
From a given player's perspective, if the same piecesstate appears twice, the moves between the two occurrences become non-effective interference data. Eliminating these data reduces unnecessary data analysis and improves the reliability of the mining results (see lines (05)–(13) in Algorithm 2).
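The elimination of such loops can be sketched like this: whenever a piecesstate reappears, the moves recorded since its first occurrence are discarded:

```python
def remove_cycles(seq):
    """Drop non-effective loops from a sequence of piecesstates: if a state
    reappears, truncate back to its first occurrence."""
    out, pos = [], {}
    for ps in seq:
        if ps in pos:
            del out[pos[ps] + 1:]                    # discard the loop
            pos = {s: i for i, s in enumerate(out)}  # rebuild positions
        else:
            pos[ps] = len(out)
            out.append(ps)
    return out

# remove_cycles(["a", "b", "c", "a", "d"]) -> ["a", "d"]
```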
Not all frequent CBSSs are good, since some of them may come from losing games. The support count of a CBSS, denoted by suppCount, can be calculated by

suppCount = posSuppCount - negSuppCount, (2)

where posSuppCount is the support count of the CBSS generated in winning games and negSuppCount that in losing games.
By means of the win/loss mark, the suppCount of each CBSS is calculated during the process of creating the sequence-tree. Hence, the weight of each branch is equal to the suppCount of the relevant CBSS. Moreover, the same pair of nodes is not allowed to appear twice on the same sequence-tree; otherwise the suppCount values would be incomputable (as illustrated in Figure 3). To solve this problem, an algorithm for shifting branches is designed (see Algorithm 3).

To obtain the frequent CBSSs, the following formula is given:

supp = suppCount / totalSeqs, (3)

where supp denotes the support of a CBSS and totalSeqs denotes the total number of sequences in the database. Given a minSupport, the frequent CBSSs are those whose supports are not less than minSupport. All the frequent CBSSs can be found by traversing the sequence-tree only once, and the script to do so is shown in Algorithm 4.
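A simplified sketch of CBSS counting, replacing the paper's sequence-tree with a plain hash map for clarity; each gamelog contributes +1 to a pair's count if the game was won and -1 if it was lost, so the stored value is the difference between winning and losing support counts, and a pair is kept when its support reaches minSupport:

```python
from collections import defaultdict

def frequent_cbss(sequences, min_support):
    """sequences: list of (states, won) pairs, one per gamelog.
    Counts each consecutive binary subsequence (CBSS); winning games
    add +1 and losing games add -1, giving suppCount = pos - neg.
    A CBSS is frequent when suppCount / totalSeqs >= min_support."""
    supp_count = defaultdict(int)
    for states, won in sequences:
        for pair in zip(states, states[1:]):  # consecutive pairs only
            supp_count[pair] += 1 if won else -1
    total = len(sequences)
    return {p: c for p, c in supp_count.items() if c / total >= min_support}
```

The dict sidesteps the branch-shifting problem of the sequence-tree, at the cost of the tree's compact sharing of common prefixes.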

In Algorithm 4, the structure of the object SequentialPattern is shown in Figure 4. The found CBSSs will serve as the data source for selecting Experience Rules.
Key State. Given a threshold percentage minFreq, Key States are those piecesstates whose frequencies are not less than minFreq. The frequency (denoted by freq) of a piecesstate can be calculated by

freq = count / totalSeqs, (4)

where totalSeqs stores the total number of sequences in the database and count denotes the number of occurrences of the piecesstate in the net winning games. The value of count is figured out during the process of data extraction (see lines (17) and (23) in Algorithm 1).
Regarding threshold selection, adjusting minFreq dynamically according to the number of Experience Rules can make the Key States more effective.
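A minimal sketch of the Key State filter, assuming freq is the state's count in net winning games divided by the total number of sequences:

```python
def key_states(win_counts, total_seqs, min_freq):
    """Keep the piecesstates whose frequency (count / totalSeqs) is
    not less than min_freq. win_counts maps a state to its occurrence
    count in net winning games."""
    return [s for s, c in win_counts.items() if c / total_seqs >= min_freq]

# With 100 sequences and minFreq = 2%, only "s1" (freq 5%) qualifies:
# key_states({"s1": 5, "s2": 1}, 100, 0.02) -> ["s1"]
```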
Checker Usage. This pattern can be calculated by formula (1), in which the falling count of each checker is figured out during the process of data extraction (see line (12) in Algorithm 1).
2.2.3. Data Representation
In this phase, the main task is to make the mined experiential patterns recognizable to the computer player. The experiential pattern here mainly refers to the Experience Rule, which is represented as a pair consisting of its last state and its next state.
According to the data structure of the Experience Rule, the process of data representation can be described as follows: replace the IDs, which denote the last states or next states of Experience Rules, with the corresponding piecesstates, and then write these piecesstates to an XML file on the server side.
However, the mined patterns may not all be reliable. For example, given a minSupport, several of the obtained Experience Rules may share the same last state; that is, one last state has multiple next states. When that last state appears, it is clearly most reliable to select the next state with the highest support. Hence, reliability selection should be done before data representation. Algorithm 5 shows this process.
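The reliability selection can be sketched as keeping, for each last state, only the next state of highest support:

```python
def select_reliable(rules):
    """rules: list of (last_state, next_state, supp) triples.
    When one last state has several next states, keep only the
    rule with the highest support."""
    best = {}
    for last, nxt, supp in rules:
        if last not in best or supp > best[last][1]:
            best[last] = (nxt, supp)
    return {last: nxt for last, (nxt, _) in best.items()}

# select_reliable([("a", "b", 0.03), ("a", "c", 0.05), ("d", "e", 0.02)])
# -> {"a": "c", "d": "e"}
```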

2.3. Applying Experiential Patterns
The way of applying experiential patterns depends on the specific gameplay. In this section, we give a computer player’s strategy as shown in Figure 5 and then explain how to apply the mined experiential patterns to the actual game.
In the course of the game, the computer player uses Experience Rules first. Algorithm 6 demonstrates the procedure of matching Experience Rules.
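A sketch of rule matching, under the assumption that piecesstates can be compared for equality and that the last and next states of a rule differ in exactly one piece; the move realizing the rule is then the pair of coordinates that differ:

```python
def match_rule(current, rules):
    """If an Experience Rule fires on the current piecesstate, return the
    (start, target) coordinates of the move that realizes its next state;
    otherwise return None. `rules` maps last states to next states, and
    each state is a tuple of piece coordinates."""
    nxt = rules.get(current)
    if nxt is None:
        return None
    # the two states differ in exactly one piece's coordinate
    start = (set(current) - set(nxt)).pop()
    target = (set(nxt) - set(current)).pop()
    return start, target

rules = {((1, 3), (2, 3)): ((1, 4), (2, 3))}
# match_rule(((1, 3), (2, 3)), rules) -> ((1, 3), (1, 4))
```

The dictionary lookup reflects the linear-time matching contrasted with TentativeMove later in the paper.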

When there are no available Experience Rules, the computer player will automatically approach a certain Key State. Doing so can promote the formation of a good game situation and may activate some Experience Rules. Obviously, the Key State to approach should be the nearest piecesstate after the current one, so all the Key States are sorted by their occurrence time. Let S1 and S2 be two different piecesstates. Their order is defined below.
Definition 1. Assume that S1 and S2 occur in the same game. If S2 is created after S1, then S2 is behind S1.
The concept of dissimilarity (denoted by Diss) is used to represent the closeness between two piecesstates. The Diss between S1 and S2 is equal to the least number of moves needed to go from S1 to S2. The smaller the value of Diss, the closer the two piecesstates. To calculate Diss, the following definitions are given.
Definition 2. Let p be a piece and let S be a subset of a piecesstate. If the coordinate of p is included in S, then p belongs to S, which can be expressed as p ∈ S.
Definition 3. Let c1 and c2 be two checkers with coordinates (x1, y1) and (x2, y2), respectively. Then, the minimal number of steps for a move from c1 to c2 is called the move distance from c1 to c2 and is denoted by MD(c1, c2).
Definition 4. Let p1 and p2 be two pieces, c1 and c2 their checkers, and S1 and S2 two piecesstates with p1 ∈ SAS(S1) and p2 ∈ SAS(S2). Then, the minimal number of steps for a move from c1 to c2 is called the Distance in Active Side between p1 and p2, denoted by DistAS(p1, p2). Similarly, one can define the Distance in Passive Side between p1 and p2, denoted by DistPS(p1, p2).
Algorithm 7 demonstrates the calculation of the distance between two pieces.

Let S1 and S2 be two piecesstates. According to Definitions 2–4, two dissimilarity matrices between S1 and S2 are defined: DissMatrixAS, whose entries are the DistAS values between the pieces of the two active sides, and DissMatrixPS, defined analogously on the passive sides. The Diss between S1 and S2 is then the minimum total distance obtained by assigning each piece of S1 to a distinct piece of S2 over both matrices.
It follows that the calculation of Diss is a typical assignment problem in linear programming, which can be solved by the Hungarian algorithm as discussed by Edmonds [38].
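The calculation can be sketched as below. The move-distance function is an assumption: it is the standard step distance on a 60° axial grid where (1, 1) and (-1, -1) are also adjacent directions, which matches the chessboard design in the Appendix but is not spelled out in the text. For the few pieces used here, a brute-force search over assignments suffices; the Hungarian algorithm would replace it for ten pieces per side:

```python
from itertools import permutations

def move_distance(c1, c2):
    """Step distance between two checkers on 60-degree axes (assumed form
    of MD in Definition 3): max of the deltas when they share a sign,
    their sum of magnitudes otherwise."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    if (dx >= 0) == (dy >= 0):
        return max(abs(dx), abs(dy))
    return abs(dx) + abs(dy)

def diss(side1, side2):
    """Dissimilarity between two sides: minimum total move distance over
    all one-to-one piece assignments (brute force; use the Hungarian
    algorithm for larger sides)."""
    return min(
        sum(move_distance(p, q) for p, q in zip(side1, perm))
        for perm in permutations(side2)
    )

# diss([(0, 0), (1, 1)], [(1, 1), (0, 0)]) -> 0 (same pieces, reordered)
```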
When there is no Key State behind the current piecesstate, or the Diss between the nearest Key State and the current piecesstate exceeds a given threshold, the computer player executes a tentative move algorithm named TentativeMove, which was designed based on the traditional alpha-beta pruning algorithm [7] (see Appendix for more details). Moreover, the pattern Checker Usage is one of the parameters of TentativeMove.
3. Results and Discussions
In order to evaluate the effect and performance of our proposed approach, we conducted an experiment on our designed test platform mentioned in Section 2. The experiment contains a series of tests. To maintain the objectivity of the experiment, we chose the same group of testers, who are ten experienced human players.
3.1. Test in Human-Computer Mode without Any Experiential Patterns
Each tester played against the computer player for 30 games, serving as offensive player (OffP) and defensive player (DefP) alternately. The computer player was only allowed to use TentativeMove using a default value of Checker Usage as the parameter. A total of 300 gamelogs in this test were recorded. The test results are shown in Table 4.
 
Note: Gts: game times; Role: role of the computer player; Wts: winning times of the computer player; Anrs: average number of the rounds per game; and Wr: winning rate of the computer player. 
Furthermore, we observed the performance of our algorithms by using the parameter ATMC (the average time for each move of the computer player). A script embedded into the program was used to calculate the response time of the computer player during testing and give the value of ATMC in the end. In this test, the value of ATMC was 5.85 s (s = seconds).
Descriptions of Effects. (a) The computer player has the ability to play correctly; (b) the winning rate of the computer player in competition with every tester is not more than a half; (c) the computer player takes a rather long time to move a piece.
Discussions. (a) Condition (i) mentioned at the beginning of Section 2 is satisfied by the TentativeMove algorithm; (b) the moves driven by TentativeMove may not be optimal, owing to the error of its evaluation function and the restricted searching depth, which affects the winning rate of the computer player; (c) TentativeMove recurses deeply, and as the game goes on, the number of checkers to be searched in each recursive layer increases rapidly, which degrades its performance.
3.2. Test in Human-Human Mode for Collecting GameLogs
Each tester played with all the others for 30 games, serving as OffP and DefP alternately. A total of 1350 gamelogs in this test were recorded. This satisfied condition (ii) mentioned at the beginning of Section 2. Subsequently, additional tests in human-computer mode were performed to select appropriate minSupport and minFreq values. These tests were similar to those in Section 3.1, and the results are shown in Table 5. Finally, the experiential pattern mining program was executed with the selected minSupport and minFreq. As a result, we gained 528 Experience Rules and 575 Key States, and the Checker Usage of every checker was updated in real time.
 
Note: Gts: game times; Ms: minSupport; Mf: minFreq; Ner: number of Experience Rules; Nks: number of Key States; Anep: average number of experiential patterns used in each game; and Auep: average usage of experiential patterns per game. 
As shown in Table 5, when minSupport was 2% and minFreq was 1%, Auep reached its peak, meaning the mined experiential patterns were best utilized.
3.3. Test in Human-Computer Mode with Experiential Patterns
The test plan here was the same as that in Section 3.1, except for the applying of the experiential patterns obtained in the test mentioned in Section 3.2. The results are shown in Table 6, and the comparison with the test results in Section 3.1 is illustrated in Figure 6. In this test, the value of ATMC was 0.93 s.
 
Note: Gts: game times; Role: role of the computer player; Wts: winning times of the computer player; Anrs: average number of the rounds per game; and Wr: winning rate of the computer player. 
Descriptions of Effects. (a) Figure 6 shows that, by using experiential patterns, the computer player has increased its winning rates in most of the games at different degrees; (b) the average number of the rounds per game has increased; (c) the value of ATMC has declined dramatically.
Discussions. (a) The effectiveness of experiential patterns will become more obvious with the increase of gamelogs, and this gives human players the impression that the computer player is learning from experience; (b) in a few games, such as 02OffP and 07DefP shown in Figure 6, although the computer player's winning rate did not increase, the average number of rounds per game rose greatly (see the lines in bold font in Table 6), and the testers clearly felt the improvement of the computer player's chess skill; (c) if Experience Rules exist, the computer player matches and uses them first. Compared with TentativeMove, the pattern matching algorithm is of linear complexity and consumes less time (see Algorithm 6).
3.4. Test in Computer-Computer Mode
In this test, we set two computer players. One could use the experiential patterns and the other could not. They took turns to serve as OffP and played with each other for 30 games. The results are shown in Table 7.
 
Note: Wts: winning times; Anep: average number of experiential patterns used in each game; ATMC: value of ATMC; and Wr: winning rate. 
Description of the Effect. The computer player using experiential patterns has a great advantage in winning rate and ATMC.
Discussion. Experiential patterns can help the computer player in making more strategic moves; still, we are aware of four losing games of CP1 (as shown in Table 7). This indicates that some of the experiential patterns may not be really good. However, the invalid patterns will die out with the growth of gamelogs.
3.5. Comparative Analysis on the Algorithms of Pattern Mining
A sequence database consists of sequences of ordered elements, and these elements contain unordered items. The common objective of the previous works [27–31] is to find interesting relations between these items. In this work, however, the pattern mining algorithms aim at finding only the frequent CBSSs in the gamelogs, which can be considered a special case of sequential pattern mining on a sequence database. For ease of comparison, we implemented the traditional algorithm PrefixSpan and added some restrictions to fit it to our gamelogs: (a) the length of each subsequence was fixed at 2, and (b) the size of each element was fixed at 1. The comparative results are shown in Figure 7.
Here minSupport was set to 2%, and the given data set contained 1,650 sequences (i.e., gamelogs) with 155,100 elements (i.e., piecesstates). Figure 7 indicates that our algorithms are more efficient than PrefixSpan.
3.6. Discussion on the Use of GameLogs
Previous works [17–24] mainly discuss methods of extracting expert knowledge from game records, where the extracted knowledge is commonly used to construct a game agent of high intelligence. In contrast, our work aims at helping a computer player learn from growing gamelogs, acquire human experience, and gradually become more skillful. It is a practical approach to making the game more fascinating. Furthermore, any board game that satisfies the two conditions mentioned at the beginning of Section 2 can utilize this approach after minor adjustment, which shows the extensibility of our approach.
4. Conclusions
In this paper, a novel approach is proposed to mine experiential patterns from the gamelogs of a board game. These experiential patterns can be utilized to improve the intelligence of a computer player. The approach makes computer players learn from human experience progressively during gaming and become more experienced with the increase of gamelogs, which makes the game more interesting. We conducted an experiment on our designed test platform of a Chinese checkers game, and the results demonstrate that our approach is effective, efficient, and extensible.
Nevertheless, our approach still needs improvement in several respects, such as the following:
(i) designing algorithms to automatically adjust the parameters minSupport and minFreq;
(ii) optimizing the related algorithms to improve their access performance on a huge sequence-tree;
(iii) researching the fast matching problem for massive numbers of experiential patterns;
(iv) developing the approach for online analysis of gamelogs;
(v) looking for more experiential patterns that make games interesting.
Appendix
(1) Rules and Data Structures. The rules and data structures used for programming are described as follows.
(i) Only the 2-player mode is used.
(ii) Move rules: there are two move rules, shift and jump. A shift moves a single piece one step in any direction to an adjacent empty checker. The jump here means single-piece-jump, which is the simplest and most usual rule. An example of a single-piece-jump is shown in Figure 8.
(iii) Checker: a computer can obtain the complete appearance of a game by scanning all the checkers on the chessboard. A checker has three properties: color, position, and status. The color stores a hexadecimal number representing the color of the piece on the checker. The position stores the coordinate of the checker. The status uses the values 1, -1, and 0 to describe who occupies the checker.
(iv) Chessboard: for the convenience of programming, the chessboard is designed for 2-player mode; that is, the four unused star corners of the traditional chessboard are cut off (as illustrated in Figure 9). We used a coordinate system to locate positions on the chessboard. This coordinate system is unfixed and has an included angle of 60°. Players have their own frames of axes with the same form (as shown in Figure 10). This design has two advantages. First, the space of the coordinate system is utilized sufficiently. Second, the coordinate discrepancy between checkers is eliminated naturally. Unavoidably, the unfixed coordinate system causes each checker to have dual coordinates (as shown in Figure 11). To solve this problem, one of the two frames of axes is taken as the reference and called the normal frame of axes (NFoA). The other, called the reverse frame of axes (RFoA), can be converted to the NFoA with formula (A.1), in which (x, y) and (x', y') are the coordinates in the NFoA and RFoA, respectively.
(v) Row: the row is the vertical distance from the current checker to the origin of the coordinate system (as illustrated in Figure 12).
For a checker c at coordinate (x, y), the following equation holds: row(c) = x + y.
(vi) FD: the FD represents the distance moved forward by a piece. It can be calculated by FD = rowFC - rowSC, where rowFC is the row of the falling checker and rowSC is the row of the starting checker. For example, if a player moves a piece from (1, 3) to (5, 6) in one step, then the FD of this move is 7 (as illustrated in Figure 13). Note that the FD is negative when a piece moves backward.
(vii) Checkersstate: the checkersstate is a two-dimensional integer array. The subscripts and values of the checkersstate represent the coordinates and statuses of the corresponding checkers, respectively.
(viii) Piece: it is time-consuming to scan all 81 checkers at runtime; by contrast, there are only 20 pieces in a game, and scanning these pieces is efficient. A piece has the same properties as a checker.
(ix) Piecesstate: a piecesstate is a special set consisting of two subsets, the active side and the passive side. Each subset has ten elements representing the coordinates of pieces, as illustrated in Figure 14. In the active side, the coordinates belong to the pieces of the player whose last move led to this piecesstate. These pieces are sorted by their row values in ascending order (and by axis when they are in the same row). The situation in the passive side is just the opposite. By using formula (A.1), a piecesstate in the RFoA can easily be transposed to one in the NFoA.
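The row and FD computations can be checked against the worked example in the text ((1, 3) to (5, 6) gives FD = 7):

```python
def row(checker):
    """Vertical distance from a checker to the origin; with the 60-degree
    axes this is the sum of the two coordinates."""
    x, y = checker
    return x + y

def forward_distance(start, fall):
    """FD = rowFC - rowSC; negative when the piece moves backward."""
    return row(fall) - row(start)

# forward_distance((1, 3), (5, 6)) -> 7, matching Figure 13's example
```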
Taking the humanhuman mode as an example, the gameplay is shown in Figure 15. And the other two play modes can be designed in a similar way.
In the gameplay, there are three problems to be solved.
Problem 1. Find all the falling checkers related to a specified starting checker.
Problem 2. Determine the legality of a move.
Problem 3. Give the computer player the ability to move spontaneously.
The solution to Problem 1 is described in Algorithm 8.

In Algorithm 8, the parameter direction denotes the searching direction. According to the game rules, a piece can be moved in six surrounding directions, so the term searching direction is defined (as illustrated in Figure 16).

According to these features, all checkers can be traversed recursively.
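The solution to Problem 1 might be sketched as follows; the six directions assume the 60° axes described above, and jump chains are traversed iteratively rather than recursively:

```python
# The six searching directions on the 60-degree axes (an assumption
# consistent with the shift rule: one step to any adjacent checker).
DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)]

def falling_checkers(start, occupied, on_board):
    """Return all legal falling checkers for a piece at `start`: shifts to
    empty neighbors plus chains of single-piece jumps (jumping over one
    adjacent piece into the empty checker directly behind it).
    `occupied` is a set of coordinates holding pieces; `on_board` tests
    whether a coordinate exists on the chessboard."""
    results = set()
    # shifts: one step to an adjacent empty checker
    for dx, dy in DIRECTIONS:
        n = (start[0] + dx, start[1] + dy)
        if on_board(n) and n not in occupied:
            results.add(n)
    # jump chains, explored with an explicit stack
    stack, visited = [start], {start}
    while stack:
        cur = stack.pop()
        for dx, dy in DIRECTIONS:
            mid = (cur[0] + dx, cur[1] + dy)
            land = (cur[0] + 2 * dx, cur[1] + 2 * dy)
            if (mid in occupied and on_board(land)
                    and land not in occupied and land not in visited):
                visited.add(land)
                results.add(land)
                stack.append(land)
    return results
```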
The solution to Problem 2 is described in Algorithm 9.

To solve Problem 3, we provide an algorithm based on the traditional alphabeta pruning method (see Algorithm 10).

In Algorithm 10, the evaluation function takes as inputs the FD of a candidate move, the Checker Usage (CU) of the falling checker, and FC_Incre, the number of additional legal falling checkers arising from the move (negative FC_Incre means that the number of falling checkers has decreased). A simple evaluation function, formula (A.6), combines these three quantities.
Formula (A.6) is based on the following considerations: the longer the forward distance, the better the move; the more falling checkers generated, the better the move; and the higher the Checker Usage of the falling checker, the better the move.

To reduce the error of the evaluation function and ensure its execution efficiency, an appropriate value of steps is given. During these steps, the computer player alternately plays both sides and records the evaluation value of every move. Then, the algorithm calculates the difference between the sums of evaluation values of its own side and the opponent's and selects the move with the maximum difference. Through testing, we set steps to 11 and obtained preferable results.
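A hedged sketch of such an evaluation function as a weighted sum rewarding all three quantities; the weights are illustrative placeholders, not the values used in the paper:

```python
def evaluate(fd, fc_incre, cu, w=(1.0, 0.5, 0.5)):
    """Score a candidate move from its forward distance (fd), the change in
    the number of legal falling checkers (fc_incre), and the Checker Usage
    of the falling checker (cu). The weights w are illustrative."""
    return w[0] * fd + w[1] * fc_incre + w[2] * cu

# A longer forward move scores higher, all else being equal:
# evaluate(7, 2, 0.25) > evaluate(3, 2, 0.25)
```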
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is supported by the Scientific Research Foundation of Hebei Education Department (Grant no. Z2012147), the Natural Science Foundation of Hebei Province (Grant no. F2014201100), and the Soft Science Research Program of Hebei Province (Grant no. 14450318D).
References
[1] A. El Rhalibi, K. W. Wong, and M. Price, “Artificial intelligence for computer games,” International Journal of Computer Games Technology, vol. 2009, Article ID 251652, 3 pages, 2009.
[2] C. E. Shannon, “Programming a computer for playing chess,” Philosophical Magazine, vol. 41, no. 7, pp. 256–275, 1950.
[3] A. M. Turing, “Computing machinery and intelligence,” Mind, vol. 59, no. 236, pp. 433–460, 1950.
[4] A. L. Samuel, “Some studies in machine learning using the game of checkers,” IBM Journal of Research and Development, vol. 3, pp. 211–229, 1959.
[5] A. L. Samuel, “Some studies in machine learning using the game of checkers II—recent progress,” IBM Journal of Research and Development, vol. 11, pp. 601–617, 1967.
[6] T. H. Kjeldsen, “John von Neumann's conception of the minimax theorem: a journey through different mathematical contexts,” Archive for History of Exact Sciences, vol. 56, no. 1, pp. 39–68, 2001.
[7] D. E. Knuth and R. W. Moore, “An analysis of alpha-beta pruning,” Artificial Intelligence, vol. 6, no. 4, pp. 293–326, 1975.
[8] J. R. Slagle and R. C. T. Lee, “Application of game tree searching techniques to sequential pattern recognition,” Communications of the ACM, vol. 14, no. 2, pp. 103–110, 1971.
[9] J. Pearl, “Asymptotic properties of minimax trees and game-searching procedures,” Artificial Intelligence, vol. 14, no. 2, pp. 113–138, 1980.
[10] R. L. Rivest, “Game tree searching by min/max approximation,” Artificial Intelligence, vol. 34, no. 1, pp. 77–96, 1987.
[11] L. W. Li and T. A. Marsland, “Probability-based game tree pruning,” Journal of Algorithms, vol. 11, no. 1, pp. 27–43, 1990.
[12] M. Levene and T. I. Fenner, “The effect of mobility on minimaxing of game trees with random leaf values,” Artificial Intelligence, vol. 130, no. 1, pp. 1–26, 2001.
[13] Y. Björnsson and T. A. Marsland, “Learning extension parameters in game-tree search,” Information Sciences, vol. 154, no. 3-4, pp. 95–118, 2003.
[14] Wikipedia, “English draughts,” 2014, http://en.wikipedia.org/wiki/English_draughts.
[15] W. Saletan, “Chess bump: the triumphant teamwork of humans and computers,” May 2007, http://www.slate.com/articles/health_and_science/human_nature/2007/05/chess_bump.html.
[16] Wikipedia, “Logistello,” 2014, http://en.wikipedia.org/wiki/Logistello.
[17] B. N. Chen, P. Liu, S. C. Hsu, and T. S. Hsu, “Abstracting knowledge from annotated Chinese-chess game records,” in Computers and Games, vol. 4630 of Lecture Notes in Computer Science, pp. 100–111, Springer, Berlin, Germany, 2007.
[18] E. C. D. van der Werf, M. H. M. Winands, H. J. van den Herik, and J. W. H. M. Uiterwijk, “Learning to predict life and death from Go game records,” Information Sciences, vol. 175, no. 4, pp. 258–272, 2005.
[19] Z.-Q. Liu and Q. Dou, “Automatic pattern acquisition from game records in GO,” The Journal of China Universities of Posts and Telecommunications, vol. 14, no. 1, pp. 100–105, 2007.
[20] S. J. Yen, T. N. Yang, J. C. Chen, and S. C. Hsu, “Pattern matching in GO game records,” in Proceedings of the 2nd International Conference on Innovative Computing, Information and Control (ICICIC '07), p. 297, Kumamoto, Japan, September 2007.
[21] T. Esaki and T. Hashiyama, “Extracting human players' shogi game strategies from game records using growing SOM,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN '08), pp. 2176–2181, IEEE, Hong Kong, June 2008.
[22] B. G. Weber and M. Mateas, “A data mining approach to strategy prediction,” in Proceedings of the IEEE Symposium on Computational Intelligence and Games (CIG '09), pp. 140–147, IEEE, Milano, Italy, September 2009.
[23] S. Takeuchi, T. Kaneko, and K. Yamaguchi, “Evaluation of game tree search methods by game records,” IEEE Transactions on Computational Intelligence and AI in Games, vol. 2, no. 4, pp. 288–302, 2010.
[24] S. Wender, Data Mining and Machine Learning with Computer Game Logs, Project Report, University of Auckland, Auckland, New Zealand, 2007.
[25] Wikipedia, “Chinese checkers,” August 2014, http://en.wikipedia.org/wiki/Chinese_checkers.
[26] N. R. Mabroukeh and C. I. Ezeife, “A taxonomy of sequential pattern mining algorithms,” ACM Computing Surveys, vol. 43, no. 1, article 3, 2010.
[27] R. Srikant and R. Agrawal, “Mining sequential patterns: generalizations and performance improvements,” in Advances in Database Technology—EDBT '96, vol. 1057 of Lecture Notes in Computer Science, pp. 1–17, Springer, Berlin, Germany, 1996.
[28] M. J. Zaki, “SPADE: an efficient algorithm for mining frequent sequences,” Machine Learning, vol. 42, no. 1-2, pp. 31–60, 2001.
[29] X. Yan, J. Han, and R. Afshar, “CloSpan: mining closed sequential patterns in large databases,” in Proceedings of the 3rd SIAM International Conference on Data Mining, pp. 166–173, SIAM, San Francisco, Calif, USA, 2003.
[30] J. Pei, J. Han, B. Mortazavi-Asl et al., “Mining sequential patterns by pattern-growth: the PrefixSpan approach,” IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 11, pp. 1424–1440, 2004.
[31] M. Y. Lin and S. Y. Lee, “Fast discovery of sequential patterns by memory indexing,” in Data Warehousing and Knowledge Discovery, vol. 2454 of Lecture Notes in Computer Science, pp. 150–160, Springer, Berlin, Germany, 2002.
[32] R.-F. Hu, L. Wang, X.-Q. Mei, and Y. Luo, “Fault diagnosis based on sequential pattern mining,” Computer Integrated Manufacturing Systems, vol. 16, no. 7, pp. 1412–1418, 2010.
[33] G. Yilmaz, B. Y. Badur, and S. Mardikyan, “Development of a constraint based sequential pattern mining tool,” The International Review on Computers and Software, vol. 6, no. 2, pp. 191–198, 2011.
[34] S. Dharani, J. Rabi, N. Kumar, and Darly, “Fast algorithms for discovering sequential patterns in massive datasets,” Journal of Computer Science, vol. 7, no. 9, pp. 1325–1329, 2011.
[35] H.-J. Shyur, C. Jou, and K. Chang, “A data mining approach to discovering reliable sequential patterns,” Journal of Systems and Software, vol. 86, no. 8, pp. 2196–2203, 2013.
[36] G.-C. Lan, T.-P. Hong, V. S. Tseng, and S.-L. Wang, “Applying the maximum utility measure in high utility sequential pattern mining,” Expert Systems with Applications, vol. 41, no. 11, pp. 5071–5081, 2014.
[37] J. Han, J. Pei, and Y. Yin, “Mining frequent patterns without candidate generation,” ACM SIGMOD Record, vol. 29, no. 2, pp. 1–12, 2000.
[38] J. Edmonds, “Paths, trees, and flowers,” Canadian Journal of Mathematics, vol. 17, pp. 449–467, 1965.
Copyright
Copyright © 2015 Liang Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.