International Journal of Computer Games Technology
Volume 2008 (2008), Article ID 378485, 7 pages
http://dx.doi.org/10.1155/2008/378485
Research Article

Game Portability Using a Service-Oriented Approach

Department of Computer Science, University of Sheffield, Regent Court, 211 Portobello Street, Sheffield S1 4DP, UK

Received 30 September 2007; Accepted 7 January 2008

Academic Editor:  Wong

Copyright © 2008 Ahmed BinSubaih and Steve Maddock. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Game assets are portable between games. The games themselves are, however, dependent on the game engine they were developed on. Middleware has attempted to address this by, for instance, separating out the AI from the core game engine. Our work takes this further by separating the game from the game engine, making it portable between game engines. The game elements that we make portable are the game logic, the object model, and the game state, which represent the game's brain, and which we collectively refer to as the game factor, or G-factor. We achieve this using an architecture based around a service-oriented approach. We present an overview of this architecture and its use in developing games. The evaluation demonstrates that the architecture does not affect performance unduly, adds little development overhead, is scalable, and supports modifiability.

1. Introduction

The shift in game development from developing games from scratch to using game engines was first introduced by Quake and marked the advent of the game-independent game engine development approach [1]. In this approach, the game engine became the collection of simulation-code modules that do not directly specify the game's behaviour (game logic) or the game's environment (level data) [2]. The game engine is thus reusable for (or portable to) different game projects. However, this shift produces a game that is dependent on its game engine. For example, why can't a player take a favourite game (say Unreal) and play it on the Quake engine, or a Quake game on the Unreal engine?

Hardware and software abstractions have made it possible to play a game on different hardware and different operating systems. These abstractions have also made it possible to use data assets such as 3D models, sound, music, and textures across different game engines. This ability should be extended to make the game itself portable. The goal of our work is to make the game engine's brain portable, where the brain holds the game state and the object model and uses the game logic to control the game. We collectively refer to these three things as the G-factor.

We see the portability of the G-factor as the next logical step in the evolution of game development and, following Lewis and Jacobson's terminology [1], we call it the game-engines-independent game development approach. A benefit of making the G-factor portable would be to encourage more developers to make use of game engines, since a particular game engine's future capability (or potential discontinuation, as was the fate of Adobe Atmosphere, which was used for Adolescent Therapy-Personal Investigator [3]) would not be a worry, as a different game engine could easily be substituted. This problem has recently been referred to as the RenderWare Problem [4], after the acquisition of the RenderWare engine by Electronic Arts (EA) and its removal from the market. We see the issue of rewriting the G-factor from scratch every time we migrate from one engine to another as similar to the undesirable practice of developing games from scratch, which was deemed unfeasible and led to the advent of game engines.

As we noted earlier, the portability of game assets is an issue that pervades all games. Related to our work, however, are moves towards addressing more aspects of portability. Examples include artificial intelligence (AI) architectures and interfaces [5]. AI architectures use custom-made or off-the-shelf components such as AI middleware (e.g., SOAR [6] or AI.Implant (http://www.biographictech.com (accessed 5/5/2007))). However, specifying the game in an AI middleware format merely moves the game from one proprietary format (the game engine's) to another (the AI middleware's). The work on interfaces aims to facilitate access to game engines. For example, Gamebots [7] and GOLOG Bots [8] are interfaces that have been used to access Unreal, with, similarly, Quakebot [9] for Quake, FlexBot [10] for Half-Life, and Shadow Door [11] for Neverwinter Nights. These provide interfaces for specific game engines. Other projects attempt to provide common interfaces to game engines, such as the initiative by the International Game Developers Association (IGDA) for world interfacing [12] and OASIS [13]. Despite this work, such interfaces may have more success in the serious games community than in the fast-evolving games industry.

In [14], we described, in detail, how to make the G-factor portable. In this paper, we give an overview of this earlier work, and instead focus more on the evaluation process, addressing issues such as performance, implementation overhead, scalability, and modifiability. We present results of conducting both an unstructured evaluation process and a structured evaluation using ATAM [15], and contrast the two in the subsequent discussion.

The remainder of this paper is structured as follows. Section 2 demonstrates the issues with the typical game development approach through the development of a sample game. This is then contrasted with the development of the same game using our approach, which enables the G-factor to be portable. Section 3 describes the evaluation process and what it revealed about the two development approaches. Finally, Section 4 presents the conclusions.

2. An Architecture for G-Factor Portability

This section contrasts a typical game development approach with the game-development approach proposed in our work. Section 2.1 describes what is considered to be a typical development approach through the development of a sample game, and highlights the dependencies associated with this approach. Section 2.2 then proposes an approach to address these dependencies and describes an architecture called game space architecture (GSA) which has been implemented to validate this approach.

2.1. A Typical Approach to Game Development

We will use a game that we call Moody NPCs to illustrate the typical approach to game development. The game consists of a number of nonplayer characters (NPCs) that react to a player based on their mood. The player can carry out actions such as greeting or swearing. Each NPC reacts to the action based on its mood, which is governed by two variables: cowardice/courage and forgiveness/punishment. The game allows the user to navigate the level and click on an NPC, which reveals its current mood and the actions available. The player can adjust the mood variables and try out different actions. The Torque game engine is used to demonstrate how the game is developed.
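The reaction rule described above can be sketched in a few lines. This is a hypothetical illustration only (the class, thresholds, and reaction names are ours, not the game's actual TorqueScript): each NPC scores an incoming player action against its two mood variables and picks a reaction.

```python
from dataclasses import dataclass

@dataclass
class NPC:
    """A minimal Moody NPC: two mood variables govern its reactions."""
    name: str
    courage: float      # 0.0 = cowardly ... 1.0 = courageous
    forgiveness: float  # 0.0 = punishing ... 1.0 = forgiving

    def react(self, action: str) -> str:
        """Map a player action to a reaction based on the current mood."""
        if action == "greet":
            return "greet_back" if self.forgiveness >= 0.5 else "ignore"
        if action == "swear":
            if self.forgiveness >= 0.5:
                return "ignore"
            # An unforgiving NPC retaliates if brave, flees if cowardly.
            return "attack" if self.courage >= 0.5 else "flee"
        return "ignore"

guard = NPC("guard", courage=0.8, forgiveness=0.2)
print(guard.react("swear"))  # attack
```

Adjusting the sliders in the game corresponds to changing `courage` and `forgiveness` here and trying the actions again.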

The typical game development approach can be grouped into four main steps, as shown in the typical approach column of Table 1. To create the game level data (step 1), the Torque engine provides a level editor called World Editor. The level can also be created in other ways, such as scripting, API calls, and configuration files. The game level data contains the terrain of the environment and the decorative objects (e.g., houses and trees). The level also contains location markers for the game objects (e.g., NPCs and the player). Scripting is used to create the other game objects (e.g., reaction, action, and interaction). This approach for creating the game level data is very common amongst game engines: 84% of the engines we surveyed provide editors to create the game level [5].

Table 1: Comparing a typical game development approach to GSA’s approach

Figure 1 shows the graphical user interface created in step 2. This has mood variable sliders in the top left corner of the screen and an actions controller in the bottom left corner. The player can use the keyboard to navigate around and the mouse to select an NPC. We used Torque's GUI Editor to set up the interface controllers, although it is also possible to use scripting and configuration files.

Figure 1: The Moody NPCs game.

Step 3 is to create the object model that holds the structure for the game objects. The object model consists of five classes: player, NPC, action, reaction, and interaction. Torque has a default object model for the player and the AI player. We extended these to add the properties that are specific to the game (i.e., the mood variables for an NPC). The remaining classes (action, reaction, and interaction) were created as a static object model using TorqueScript. Finally, step 4 is to create the game logic, which controls how the NPCs react to the player's actions.

2.2. GSA’s Approach

Figure 2 illustrates the software dependencies problem GSA aims to tackle. The example used is the development of Gears of War, which is dependent on Unreal Engine 3 and the underlying software [16]. This is similar to the dependency the Moody NPCs game suffers from, and also to the dependencies exhibited by the projects we surveyed in an earlier paper [5].

Figure 2: GSA overview. The numbers highlighting the communication between the game space and the game engine are described in Figure 3.

GSA’s objective is to reduce these dependencies by adopting a service-oriented design philosophy, which enables the G-factor to exist independently of the game engine. The service-oriented approach has proved its practicality for achieving other kinds of portability, such as across platforms and languages [17]. The design approach employed in GSA combines a variant of the model-view-controller (MVC) pattern, which separates the G-factor (i.e., the model) from the game engine (i.e., the view), with on-the-fly scripting to enable communication through an adapter (i.e., the controller). Using a variant of MVC rather than the standard pattern avoids a known liability in which the view is tightly coupled to the model [18]. On-the-fly scripting is used to retain the attractive attributes of typical game development, where data-driven mechanisms are used to modify the G-factor. Most notably, modifiability is upheld in the typical game development approach through scripting, which our surveys found to be very popular with game engines and with projects that use game engines [5]. To maintain this level of modifiability (i.e., scripting-level access) to both the game engine and the game space, GSA uses on-the-fly scripting to communicate with both via the adapter. For example, a communication may begin with the game engine sending updates to the adapter (step 1 in the communication protocol shown in Figure 3). The adapter converts them into scripts or direct API calls (step 2), which are then used to update the game space (step 3). When the game space needs to communicate with the game engine, it notifies the adapter of the changes that need to be communicated (step 4). The adapter formats these into the engine's scripting language (step 5) and sends them to the engine to be executed (step 6). The separation and communication mechanisms allow the G-factor to exist independently of the game engine.
The effect this has on portability is that, when migrating to a new engine, the elements in the game space (i.e., the game state, object model, and game logic) can stay intact. Contrast this with migrating a game developed using the typical approach, which often requires all three elements to be created again, and the extent of the effort saved becomes clear.
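The six-step exchange can be sketched as follows. The class and method names are hypothetical, and the generated script uses a TorqueScript-like syntax purely for illustration; the point is that the adapter is the only component that knows the engine's scripting language, so only the adapter changes when the engine does.

```python
class GameSpace:
    """Holds the G-factor: game state, object model, and game logic."""
    def __init__(self):
        self.state = {}
        self.pending = []  # changes queued for delivery back to the engine

    def update(self, obj_id, prop, value):           # step 3
        self.state[(obj_id, prop)] = value
        # Game logic would run here; any resulting changes are queued
        # so the adapter can notify the engine (step 4).
        self.pending.append((obj_id, prop, value))

class Adapter:
    """Controller mediating between engine (view) and game space (model)."""
    def __init__(self, space):
        self.space = space

    def on_engine_update(self, obj_id, prop, value):  # steps 1-2
        # Convert the engine update into a game-space call.
        self.space.update(obj_id, prop, value)

    def flush_to_engine(self):                        # steps 5-6
        # Format pending changes in the engine's scripting language.
        scripts = [f"{obj}.{prop} = {val};"
                   for obj, prop, val in self.space.pending]
        self.space.pending.clear()
        return scripts  # in GSA these are sent to the engine and executed

space = GameSpace()
adapter = Adapter(space)
adapter.on_engine_update("npc1", "mood", 0.7)
print(adapter.flush_to_engine())  # ['npc1.mood = 0.7;']
```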

Figure 3: Communication between the game engine and the game space.

As was shown in Table 1, the first difference between our approach and the typical game development approach is the creation of the game objects, which is split between the game engine and the game space because there are two types of game objects. The first type consists of game objects that must have representations inside the game engine to provide visuals, such as the player and the NPCs needed for the Moody NPCs game. These require real-time processing in the game engine, and it is impractical to communicate every frame from the game space to the game engine; therefore, these objects are created in both the game engine and the game space, and only updates are communicated. The second type consists of game objects that have no representation inside the game engine, such as the action, interaction, and reaction objects. These can be created in the game space only. The object model creation is similarly split between the game engine and the game space. The second difference is that the game logic is created in the game space rather than the game engine. The third difference is the creation of the adapter, which handles the communication between the game space and the game engine.
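The split between the two object types can be sketched as a simple registry rule (the names and the `has_visual` flag are our illustrative assumptions, not GSA's actual API): every object lives in the game space, but only objects with a visual presence are mirrored in the engine.

```python
engine_objects = set()  # objects mirrored in the game engine
space_objects = set()   # all objects live in the game space

def create_object(name, has_visual):
    """Register a game object, mirroring it in the engine only if visual."""
    space_objects.add(name)       # every object exists in the game space
    if has_visual:
        engine_objects.add(name)  # only visual objects need an engine twin

# Moody NPCs objects: player/NPCs are visual; actions/reactions are not.
for name, visual in [("player", True), ("npc1", True),
                     ("action_greet", False), ("reaction_flee", False)]:
    create_object(name, visual)

print(sorted(engine_objects))  # ['npc1', 'player']
print(sorted(space_objects))
```

Only the objects in `engine_objects` exchange per-frame updates with the engine; the rest never cross the adapter.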

3. Evaluation and Discussion

A software architecture can be evaluated using structured or unstructured methods. An unstructured evaluation, a common way to evaluate a software architecture [19], consists of throwing ad hoc challenges at the architecture in the hope that either the architecture can address them or that they will reveal its limitations. In a structured evaluation, methods such as ATAM [15], SAAM [20], ARID [21], ABAS, PASA, and CBAM [22] are used to probe the architecture with the aim of exercising it as a whole. We used ATAM for our structured evaluation: a method that is not limited to a particular stage of the development cycle, and which involves stakeholders (e.g., users, maintainers, developers, managers, testers, architects, and security experts) in specifying the architecture attributes to address. In the following paragraphs, we summarise the findings of the detailed structured [23] and unstructured [24] evaluations carried out in our earlier research papers. We focus on four attributes: portability, performance, modifiability, and scalability. Following this, we contrast the structured and unstructured approaches to evaluation.

3.1. Portability

The unstructured evaluation found that GSA managed to address the portability challenge by servicing the same G-factor to two different engines [24]: a bespoke engine developed on top of DirectX 9.0 and the Torque game engine (see Figure 4). This was done without modifying the G-factor; the changes were confined to the adapter. Similarly, the structured evaluation found that GSA supports portability: the separation provided by the MVC pattern allows multiple views (i.e., game engines) of the same model (i.e., the G-factor). In addition, the structured evaluation found that portability could be undermined if the game engine does not fully expose the required functionality through scripting, since the adapter relies on scripting to communicate back to the game engine (see Figure 2).

Figure 4: (a) Smart terrain running on bespoke engine, (b) the same G-factor running on Torque [24].
3.2. Performance

The aim here was to find the average reduction in frames per second (fps) caused by the use of GSA. To get a performance indicator, a player was simulated running continuously around a path for 30 minutes (see Figure 5). Using this simulation, two performance tests were run to contrast the overheads of a game developed with the typical development approach against one developed using GSA. The performance overheads measured were fps, CPU, memory, and network (for the test using the game space). The average reduction in fps was 11.69% when following the GSA approach. This reduction is relatively large for a small game, and more tests are needed to indicate how it will scale with game size. However, comparing this finding with the findings from the scalability challenge (described later), we find that GSA does not affect performance unduly. The structured evaluation revealed two issues. First, data integrity across the different game states (i.e., in the game engine and the game space) is at risk, because of delays that might occur from the separation imposed by the MVC pattern, which adds an overhead for exchanging information. Initial tests revealed no problems, but further tests are required before this can be established with certainty. Second, if the message load increases, there is a danger that the game space becomes the bottleneck of the architecture.
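The reported figure is a straightforward percentage reduction, sketched below. The fps values are made up for illustration; only the 11.69% average comes from the paper.

```python
def fps_reduction(fps_typical, fps_gsa):
    """Percentage drop in average fps from the typical build to the GSA build."""
    return 100.0 * (fps_typical - fps_gsa) / fps_typical

# Illustrative numbers chosen to reproduce the reported average reduction.
print(round(fps_reduction(100.0, 88.31), 2))  # 11.69
```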

Figure 5: First-person shooter game [24].
3.3. Modifiability

Here, the success of GSA was judged by the ability to create different G-factors on the same architecture using different object models and game logic. The fact that different G-factors (Figures 1, 4, 5, and 6) can be developed using GSA demonstrates its modifiability. In addition, a structured evaluation process measured modifiability across the different parts of GSA by examining how each architectural decision affects it, and how it trades against the other quality attributes (e.g., portability and performance) [23]. The evaluation revealed that if a single unique identifier cannot be set for a game object in both the game space and the game engine, then GSA becomes very sensitive to modification, as each mapping has to be added manually in the adapter. Furthermore, on-the-fly scripting allows for better modifiability but runs more slowly than precompiled code. Modifiability is also enhanced by the use of a variant of the MVC pattern, which reduces the dependencies between the model and the view.

Figure 6: A serious game for traffic accident investigators [25].
3.4. Scalability

The aim was to identify how much overhead is added as the game size grows. This was examined by developing a serious game for traffic accident investigators [25] (see Figure 6). The adapter's implementation overhead for each challenge is presented in Table 2. Using the implementation overheads of the adapters compared with the game logic sizes of each test game developed, we forecast that for small games the overhead is proportionally large, but that it stabilises at around 6% for code sizes between 100,000 and 500,000 lines (see Figure 7) (http://support.microsoft.com/kb/828236 (accessed 24/8/2007)). The scalability challenge also showed that the performance overhead was not noticeable when judging the game's success in training [25], for which smooth play is crucial to avoid frustrating the users. The structured evaluation found that using a dynamic object model allows for better game-model scalability, but it makes the architecture very sensitive to change, as changes propagate to the game logic and to the adapter.
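The overhead metric behind Figure 7 is simply adapter size as a percentage of game-logic size. The line counts below are invented to show the trend the paper reports (proportionally large for small games, settling near 6%); they are not the paper's measured values from Table 2.

```python
def adapter_overhead_pct(adapter_lines, logic_lines):
    """Adapter implementation overhead relative to game-logic size."""
    return 100.0 * adapter_lines / logic_lines

# Hypothetical line counts illustrating the forecast trend.
for logic, adapter in [(1_000, 400), (100_000, 6_200), (500_000, 30_000)]:
    pct = adapter_overhead_pct(adapter, logic)
    print(f"{logic:>7} logic lines -> {pct:.1f}% overhead")
```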

Table 2: The implementation overhead for the adapter.
Figure 7: The adapter code forecast compared to the game logic code size.
3.5. Structured versus Unstructured Evaluation

The unstructured evaluation revealed how well the architecture copes with the challenges. However, there was no easy way to establish the correlation between a challenge and the architectural decisions that supported or undermined it. Furthermore, the unsystematic way of generating scenarios (i.e., challenges) meant that time was unnecessarily spent implementing different tests when one could have served all the challenges (e.g., the implementation of the serious game used in the scalability challenge (see Figure 6) could have been used to test all of the challenges). This can be attributed to the incomplete overall evaluation picture that results from a lack of systematic guidance. Admittedly, there is no guarantee that a structured evaluation would not produce redundant probing since, like the unstructured evaluation, it is also scenario-based. However, the chances are reduced because the generation of scenarios is guided by a utility tree (the utility tree elicits the quality attributes down to the scenario level, providing a mechanism for translating architectural requirements into concrete practical scenarios) in which all the scenarios are identified. This serves two purposes. First, once all the scenarios are present, experimentation can begin by choosing a test in which, preferably, all the scenarios can be addressed. Second, the tree describes the decisions that are going to be analysed by each scenario, which means that any repetitive probing can be identified.
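A utility tree of the kind described can be represented as nested mappings from quality attributes, through refinements, down to concrete scenarios. The attributes and scenario wordings below are loose paraphrases of this paper's challenges, assembled by us for illustration; flattening the tree shows how the full scenario set is visible before any experimentation begins.

```python
# Hypothetical ATAM utility tree: attribute -> refinement -> scenarios.
utility_tree = {
    "portability": {
        "engine migration": [
            "move the same G-factor from a bespoke engine to Torque",
        ],
    },
    "performance": {
        "frame rate": ["measure fps drop with a player running for 30 min"],
        "data integrity": ["check state consistency under message delays"],
    },
    "modifiability": {
        "object model": ["add a new NPC property without touching the logic"],
    },
}

def all_scenarios(tree):
    """Flatten the tree into (attribute, refinement, scenario) triples."""
    return [(attr, ref, s)
            for attr, refs in tree.items()
            for ref, scenarios in refs.items()
            for s in scenarios]

print(len(all_scenarios(utility_tree)))  # 4
```

With every scenario enumerated up front, one test game can be chosen to exercise as many of them as possible, avoiding the redundant implementations the unstructured evaluation produced.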

The problem with scenario-based evaluation, which both unstructured and structured evaluations use, is that the evaluation is only as good as the scenarios generated, which in turn depend on the stakeholders in the evaluation team. Although measures are put in place to ensure that the selection includes all the important personnel (e.g., architects and domain experts), the fundamental problem persists.

Contrasting ATAM’s output with the unstructured evaluation results, which quite often answer a challenge with yes or no, or with metrics such as network load or fps, highlights the strengths of ATAM. ATAM classifies the decisions according to how they affect the architecture (i.e., whether they support or undermine it). We found the ATAM process helpful in understanding our architecture better. A further benefit is that it should also act as a guide when there is a need to modify or evolve GSA, because it reveals the strengths and weaknesses of the architectural decisions. In future work, we recommend using ATAM alongside the development cycle; this is where ATAM is designed to be most effective, revealing issues at the stages of the development cycle where they are cheapest to address. Had we started with ATAM, we believe it would have saved us time and effort by avoiding the creation of a number of redundant challenges.

4. Conclusions

We have presented an architecture for making games (i.e., G-factors) portable between game engines. The changes required to the typical game development approach have been demonstrated through the development of a sample game called Moody NPCs. In addition, we have presented the findings from two types of evaluation. These reveal that GSA is capable of making the G-factor portable, but that it adds performance and implementation overheads. Despite these overheads, GSA has been shown to scale to real-world applications [25]. Modifiability has been found to suffer in cases where a unique identifier cannot be set for game objects.

Whilst the unstructured evaluation managed to reveal issues with the architecture, the mechanism of throwing ad hoc challenges resulted in redundant challenges and failed to articulate which architectural decisions undermined or supported GSA. Using ATAM guided the evaluation better; employed earlier, it could have helped avoid the redundancy in the unstructured evaluation. It was also capable of revealing how the architectural decisions interact to support the required attributes. Although the portability presented in this work has only been shown across two engines, the approach used to link those engines via the adapter is consistent, and, therefore, there is no reason why it cannot be followed to link other engines.

With gameplay predicted to be the distinguishing factor between future games [26], and with the increasing number of commercial licensees of game engines and the growing interest from the serious games community, the need for portable games will increase for two reasons. First, developers can keep the visual aspects of their game up-to-date with the latest game engine. Second, portability offers security against the RenderWare Problem [4]. However, the incentive for game engine developers is less clear.

References

  1. M. Lewis and J. Jacobson, “Games engines in scientific research,” Communications of the ACM, vol. 45, no. 1, pp. 27–31, 2002.
  2. J. Wang, M. Lewis, and J. Gennari, “Emerging areas: urban operations and UCAVs: a game engine based simulation of the NIST urban search and rescue arenas,” in Proceedings of the 35th Winter Simulation Conference, pp. 1039–1045, New Orleans, La, USA, December 2003.
  3. D. Coyle and M. Matthews, “Personal investigator: a therapeutic 3D game for teenagers,” in Proceedings of the Conference on Human Factors in Computing Systems (CHI '04), Vienna, Austria, April 2004.
  4. S. Carless, “Rise of the game engine,” Game Developer, p. 2, 2007.
  5. A. BinSubaih, S. Maddock, and D. Romano, “A survey of ‘game’ portability,” Department of Computer Science, University of Sheffield, Sheffield, UK, 2007.
  6. J. E. Laird, M. Assanie, B. Bachelor, et al., “A testbed for developing intelligent synthetic characters,” in Proceedings of the AAAI Spring Symposium on Artificial Intelligence and Interactive Entertainment, pp. 52–56, Palo Alto, Calif, USA, March 2002.
  7. R. Adobbati, A. N. Marshall, A. Scholer, et al., “Gamebots: a 3D virtual world test-bed for multi-agent research,” in Proceedings of the 2nd International Workshop on Infrastructure for Agents, MAS and MAS Scalability, Montreal, Quebec, Canada, May 2001.
  8. S. Jacobs, A. Ferrein, and G. Lakemeyer, “Unreal Golog bots,” in Proceedings of Workshop on Reasoning, Representation, and Learning in Computer Games (IJCAI '05), Edinburgh, Scotland, UK, July-August 2005.
  9. J. E. Laird, “It knows what you're going to do: adding anticipation to a Quakebot,” in Proceedings of the 5th International Conference on Autonomous Agents, pp. 385–392, Montreal, Quebec, Canada, May-June 2001.
  10. A. Khoo, G. Dunham, N. Trienens, and S. Sood, “Efficient, realistic NPC control systems using behavior-based techniques,” in Proceedings of the AAAI Spring Symposium on Artificial Intelligence and Interactive Entertainment, Palo Alto, Calif, USA, March 2002.
  11. T. S. Hussain and G. Vidaver, “Flexible and purposeful NPC behaviors using real-time genetic control,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '06), pp. 785–792, Vancouver, BC, Canada, July 2006.
  12. A. Nareyek, N. Combs, B. Karlsson, S. Mesdaghi, and I. Wilson, “The report of the IGDA's artificial intelligence interface standards committee,” 2005, International Game Developers Association http://www.igda.org/ai/report-2005/report-2005.html.
  13. C. Berndt, I. Watson, and H. Guesgen, “OASIS: an open AI standard interface specification to support reasoning, representation and learning in computer games,” in Proceedings of Workshop on Reasoning, Representation, and Learning in Computer Games (IJCAI '05), pp. 19–24, Edinburgh, Scotland, UK, July-August 2005.
  14. A. BinSubaih and S. Maddock, “G-factor portability in game development using game engines,” in Proceedings of the 3rd International Conference on Games Research and Development, pp. 163–170, Manchester, UK, September 2007.
  15. P. Clements, R. Kazman, and M. Klein, Evaluating Software Architectures: Methods and Case Studies, Addison-Wesley, Reading, Mass, USA, 2001.
  16. T. Sweeney, “The next mainstream programming language: a game developer's perspective,” in Proceedings of the 33rd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, p. 269, Charleston, SC, USA, January 2006.
  17. T. Erl, Service-Oriented Architecture: Concepts, Technology, and Design, Prentice-Hall, Englewood-Cliffs, NJ, USA, 2005.
  18. F. Buschmann, R. Meunier, H. Rohnert, P. Sommerlad, and M. Stal, Pattern-Oriented Software Architecture: A System of Patterns, vol. 1, John Wiley & Sons, New York, NY, USA, 1996.
  19. R. Bahsoon and W. Emmerich, “Architectural stability and middleware: an architecture centric evolution perspective,” in Proceedings of the 2nd International ECOOP Workshop on Architecture-Centric Evolution (ACE '06), Nantes, France, July 2006.
  20. R. Kazman, M. Klein, M. Barbacci, T. Longstaff, H. Lipson, and J. Carriere, “The architecture tradeoff analysis method,” in Proceedings of the 4th IEEE International Conference on Engineering of Complex Computer Systems (ICECCS '98), pp. 68–78, Monterey, Calif, USA, August 1998.
  21. P. Clements, “Active reviews for intermediate designs,” Software Engineering Institute, Pittsburgh, Pa, USA, 2000.
  22. R. Bahsoon and W. Emmerich, “Evaluating software architectures: development, stability and evolution,” in Proceedings of the ACS/IEEE International Conference on Computer Systems and Applications, p. 47, Tunis, Tunisia, July 2003.
  23. A. BinSubaih and S. Maddock, “Using ATAM to evaluate a game-based architecture,” in Proceedings of the 2nd International ECOOP Workshop on Architecture-Centric Evolution (ACE '06), Nantes, France, July 2006.
  24. A. BinSubaih, S. Maddock, and D. Romano, “Game logic portability,” in Proceedings of the ACM SIGCHI International Conference on Advances in Computer Entertainment Technology (ACE '05), pp. 458–461, Valencia, Spain, June 2005.
  25. A. BinSubaih, S. Maddock, and D. Romano, “A serious game for traffic accident investigators,” International Journal of Interactive Technology and Smart Education, vol. 3, no. 4, pp. 329–346, 2006.
  26. E. Dounis, “The great debate: gameplay vs. graphics,” 2006, http://www.gamersmark.com/articles/205/.