Abstract

We present a regression test selection technique for C# programs. C# is a fairly new language that is often used within the Microsoft .Net framework to give programmers a solid base for developing a variety of applications. Regression testing is performed after a program is modified. Regression test selection refers to selecting a suitable subset of test cases from the original test suite to be rerun. It aims to provide confidence that the modifications are correct and did not affect other, unmodified parts of the program. The regression test selection technique presented in this paper accounts for C#.Net specific features. Our technique consists of three phases: the first phase builds an Affected Class Diagram consisting of the classes that are affected by the change in the source code; the second phase builds a C# Interclass Graph (CIG) from the affected class diagram based on C# specific features, and in this phase we reduce the number of selected test cases; the third phase involves further reduction and a new metric for assigning weights to test cases in order to prioritize the selected test cases. We have empirically validated the proposed technique using case studies. The empirical results show the usefulness of the proposed regression testing technique for C#.Net programs.

1. Introduction

Testing software is an important part of the production life cycle of a program, and it is an expensive activity. Hence, appropriate testing methods are necessary for ensuring the reliability of a program. Regression testing aims to provide confidence in the correctness of a program after its modification. During the initial development of the program, a set of test cases is saved and a table of test case-method coverage information can be determined. After a program is modified, regression test selection requires a subset of test cases, T′, to be selected from the original test suite T for rerunning on the modified program, with the objective of providing confidence that no unintended effects have been caused by the modification.

It would be costly for regression testing to repeat the whole set of test cases T used in the initial development of the program, and unreliable to choose a random subset of test cases from T. Therefore, it is necessary to select a suitable subset of test cases from T to run. Regression test selection reduces the cost of testing by focusing on the changes that take place in the program.

A number of regression test selection approaches have been developed. Several approaches and techniques have addressed the problem of regression testing for procedural programs. Examples of procedural-based techniques are slicing, data flow, firewall, and optimization techniques [1–4]. Others have dealt with object oriented (OO) programs. Some of these OO techniques for regression test selection are based on UML diagrams and make use of only design information; examples of these techniques are presented in [5–9]. Also, object oriented firewall techniques have been proposed in [10]. Extended firewalls are proposed for object oriented regression testing in [11]. Their algorithm makes a distinction between affected components and checked components, and then detects all the components that directly or indirectly call the modified component. The authors find that the extended firewall reveals more faults than normal firewall techniques. The object oriented code-based techniques have been based on Java and C++ to handle the interprocedural control flow and the features of these languages. To handle Java constructs and features, a regression test selection algorithm has been developed which builds a Java interclass graph as an extension of the control flow graph [12]. Rothermel et al. [13] have addressed the regression test selection problem for C++ software with a code-based technique that builds an interclass control flow graph in order to find the difference between the original and the new programs. Jang et al. [14] presented another regression testing technique for C++ software. The authors mainly focus on functions that should be retested in C++ programs. They identify the type of change, and a firewall of that type is determined in order to find the dependency between the statements: a unit firewall is a set of member functions which require unit level retesting, and an integration firewall is a set of interactions between member functions which require integration level retesting. Recently, Li and Harrold [15] presented a method for random test selection based on the Chernoff bound. Chittimalli and Harrold [16] described a regression test selection technique based on system requirements. Also, Qu et al. [17] proposed a configuration-aware regression testing technique for evolving software systems using combinatorial interaction testing, configuration sampling, and prioritization.

C# is a fairly new object oriented programming language which is integrated in Microsoft’s .NET framework for application development. However, no work has been reported on regression testing C# applications with their specific features. The objective of this paper is to present, for the first time, a code-based regression test selection technique for C# programs. The technique consists of three phases. In the first phase, we build an Affected Class Diagram covering the classes that are changed or affected by such changes in the source code. In the second phase, we develop a C# interclass graph, which represents the control flow of the methods in the classes of the affected class diagram and their interrelationships; this phase extends previous object oriented techniques by covering C# specific features. In the third phase, we assign weights to test cases using a proposed metric and, according to the weights, reorder the test cases that need to be rerun. We show the usefulness of our technique by presenting the results of applying it to a few examples.

This paper is organized as follows. Section 2 presents some assumptions and describes the required data structures. Sections 3–5 describe our technique for selecting test cases from the initial suite of tests. Section 6 presents and discusses our empirical results. Section 7 proposes further work. Section 8 concludes the paper.

2. Assumptions and Notation

We assume that we have the test suite T consisting of the test cases determined during the initial development of the program. Each test case in T covers one or more methods in the classes. We use an adjacency table to represent the test case-method coverage information. After modifying the program, we also save the set of directly changed methods and the set of deleted methods.
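For illustration, the test case-method coverage information can be kept as a simple adjacency structure in C#. The sketch below is a minimal example of such a structure; the type and member names are our own and are not prescribed by the technique.

using System.Collections.Generic;

// Minimal sketch of a test case-method coverage table (names are illustrative).
// Each test case is mapped to the set of fully qualified methods it exercises.
public class CoverageTable
{
    private readonly Dictionary<string, HashSet<string>> coverage =
        new Dictionary<string, HashSet<string>>();

    public void Record(string testCase, string qualifiedMethod)
    {
        if (!coverage.TryGetValue(testCase, out var methods))
        {
            methods = new HashSet<string>();
            coverage[testCase] = methods;
        }
        methods.Add(qualifiedMethod);
    }

    public bool Covers(string testCase, string qualifiedMethod) =>
        coverage.TryGetValue(testCase, out var methods) && methods.Contains(qualifiedMethod);

    public IEnumerable<string> TestCases => coverage.Keys;
}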

The set of affected methods consists of the methods that directly or indirectly call the changed, added, or deleted methods. These methods are gathered from the trace file produced by the .Net framework. The main purpose of the trace file is testing and optimization after an application is compiled and released. We instrument the trace file by writing information about each method that calls another method (e.g., A.m calls B.n, where A and B are classes and m and n are methods).
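A minimal sketch of the kind of instrumentation we have in mind is shown below, using System.Diagnostics.Trace; the helper class and the exact format of the trace line ("caller calls callee") are our own illustration, not something mandated by the .Net framework.

using System.Diagnostics;

// Illustrative instrumentation: each instrumented method writes one line to the
// registered trace listeners recording which method it is about to call.
public static class CallTracer
{
    public static void LogCall(string callerClass, string callerMethod,
                               string calleeClass, string calleeMethod)
    {
        // Produces lines such as "Order.Save calls Db.Execute" in the trace file,
        // assuming a TextWriterTraceListener has been registered elsewhere.
        Trace.WriteLine($"{callerClass}.{callerMethod} calls {calleeClass}.{calleeMethod}");
    }
}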

Table 1 lists the notation used in the rest of the paper.

3. Technique Overview

Our technique for test case selection consists of three phases.

Phase-1
Test case selection based on the affected class diagram (ACD).
Step 1. Generate the ACD, which contains: all the classes that contain modified and/or deleted methods; the base classes and derived classes of the changed classes; and the classes that use the changed classes (i.e., classes containing methods that directly or indirectly call the modified methods). In addition to classes, the ACD may also contain the following: interfaces; web and windows services, which represent the use of an external method or class outside the program, hosted on a server or on the internet; and COM+ components, which are serviced components registered by .Net and deployed on a server so that many applications can use them (the advantage of COM+ components is their reusability).
Step 2. Select a set of test cases T1 based on the ACD. We use the test case-method coverage table and the test suite T for finding the test cases T1 that cover the ACD.

Phase-2
Test case selection based on the C# Interclass Graph (CIG).
Step 3. Generate the CIG, which is a control flow graph that covers all the affected methods of the classes in the ACD. It takes as input the set of classes in the ACD and builds the control flow graphs of their affected methods.
Step 4. Select a set of test cases T2 based on the CIG. Given the CIG graphs of the old and the modified programs, we traverse the same paths in the two graphs, following corresponding edges until we reach a difference in the target nodes. Such an edge is marked as an affected or potentially affected edge and is saved; any test case that covers this edge is selected for rerunning. We traverse all the paths of the two graphs until we have found the whole set of affected or potentially affected edges, and we select the test cases that cover them.

Phase-3
Further reduction and prioritization.
Step 5. Reduce the number of test cases selected in the first or second phase: for the first phase, for each affected method, we randomly select one test case covering this method; for the second phase, we randomly select one test case covering each affected or potentially affected edge.
Step 6. Prioritize the test cases in T1 or T2, depending on whether the tester chooses to carry out both selection phases or to stop after the first phase.

The six steps are explained in detail in the following sections.

4. Test Selection Based on ACD

We build an Affected Class Diagram (ACD) that includes the classes containing modified or deleted methods. Then, the supertypes of these classes are determined and added to the ACD. We continue by finding the derived classes of all the classes in the ACD, but we select only those derived classes that contain a method overriding a modified/deleted method. The edge notations used in the ACD are the following. (i) Inheritance edge: a supertype edge going from the derived class to the base class. (ii) Use edge: used when a class contains a method that explicitly calls another class. (iii) Indirect subtype edge: used when one class contains an overridden method and another class contains a method overriding it; there is an indirect subtype edge between the two classes.
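The following illustrative classes (a hypothetical example of ours, not taken from the subject programs) show how the three edge types arise when Account.Withdraw is the modified method.

// Illustrative example (hypothetical classes) of the three ACD edge types.

public class BankObject { }                  // base class of the changed class: added to the
                                             // ACD with an inheritance edge from Account.

public class Account : BankObject
{
    // Assume Withdraw is the modified method, so Account is an affected class.
    public virtual void Withdraw(decimal amount) { /* modified code */ }
}

public class SavingsAccount : Account
{
    // Overrides the modified method, so SavingsAccount is added to the ACD with
    // an indirect subtype edge between SavingsAccount and Account.
    public override void Withdraw(decimal amount) { /* ... */ }
}

public class Teller
{
    // Explicitly calls the changed class, so Teller is added to the ACD with a
    // use edge between Teller and Account.
    public void ProcessWithdrawal(Account account, decimal amount)
    {
        account.Withdraw(amount);
    }
}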

Algorithm 1 presents the algorithm used for building an Affected Class Diagram. The ACD includes the classes that are expanded next in the C# interclass graph.

Input: the set of methods inside the classes;
  the set of classes of the whole program;
  AC: an affected class, i.e., a class that contains a changed/deleted method;
  CD: the class diagram of the whole program;
  ACD: initially, the classes that contain changed/deleted methods;
Output: the set of classes that need to be considered for regression testing,
  organized as an Affected Class Diagram (ACD) with the relationships between the classes.
Description: this algorithm selects, from all the classes of the program, the subset of
  classes that are affected by the changes made to the program.
Begin
 For each affected class AC do
   Find the base class B of AC from CD   // B is the supertype of AC
   If B is not in ACD then
     Add B to ACD
     Add an inheritance edge between AC and B
   End if
 End for
 // Up to this stage, ACD contains the changed classes and their base classes,
 // with the relationships between them.
 For each class AC in ACD do
   Get the methods of AC
   If AC contains an overridden method m then
     For each class D in CD that contains a method overriding m do
       If D is not in ACD then
         Add D to ACD
         Add an indirect subtype edge between D and AC
       End if
     End for
   End if
 End for
 // Now find the classes that explicitly reference the changed classes, directly or
 // indirectly, using the trace file.
 Open the trace file
 While not EOF(Trace) do
   Read a trace statement
   If (A.m calls B.n) and (B is in ACD) and (n is an affected method of B) then
     // A calls an affected method n of a class B that is already in ACD
     If A is not in ACD then
       Add A to ACD
       Add a use edge between A and B
     End if
   End if
 End while
End

Test case selection in this phase involves selecting the set T1 of test cases that cover the affected methods of the classes in the ACD. Algorithm 2 shows the algorithm for selecting T1.

Input: Affected Class Diagram (ACD);
  the test case-method coverage table for the methods and their corresponding test cases;
  AM: the set of changed/deleted methods and the methods explicitly calling them.
Output: the set of test cases T1, selected from the original test suite T, that cover AM.
Description: this algorithm selects the test cases that cover the methods affected by the
modification of the program.
Begin
 T1 = { }
 For each Class.method m in the coverage table do
   If Class is in ACD and m is in AM then
     For each test case t that covers m do
       T1 = T1 ∪ {t}
     End for
   End if
 End for
 Return T1
End

5. Test Selection Based on CIG

5.1. Building the C# Interclass Graph

The C# interclass graph (CIG) is based on the Affected Class Diagram. This graph represents the control flow between the different methods of the classes in the ACD. In order to select the test cases that need to be rerun, we build two CIG graphs, one for the original program and one for the modified program. We traverse corresponding edges of the two graphs until we detect an edge whose target nodes differ. We mark this edge as affected or potentially affected and add it to the set of affected or potentially affected edges. After traversing the whole CIG graph, we obtain a set of edges marked as affected or potentially affected, and from these edges we select the test cases that cover them. The following subsections describe the CIG.

5.1.1. Edge Notation for the CIG Graph

The C# interclass graph uses a set of edges to identify the type of method calls and the control flow between statements inside the method. A control flow edge represents the flow of consecutive statements inside a method. The call edge represents an explicit call to a method. The external path edge is an edge that identifies a call to an external component such as a class library or a COM+ component. This edge denotes the existence of an external component that the application uses to perform certain functionality. The path edge represents or summarizes the paths through the method called.

Each edge in the C# Interclass Graph has a label with the type of receiver instance that causes the method to be bound to the call. The edge notation for the CIG graph is represented as follows:

Control flow edge: bold arrow.

(Method) call edge: dotted arrow.

External path edge: dashed arrow.

Path edge: normal arrow.

5.1.2. Nodes Represented in the CIG Graph

Class library: a set of compiled classes used inside the program. Class libraries are not expanded in the graph because they are reusable components. The C# program instantiates an object of a library class and uses its methods. We represent a call to a class library by a dotted-circle node and an exit node, with a path edge between them that represents the paths inside the class library. We encode the name of the class library, the class, and the method called in the class library node. If the called method inside the class library is changed, all nodes calling this class library will be detected. The notation used is shown in Figure 1.

Calling a Web Service
Web services are classes hosted on a server or on the internet that provide specific functionality; we call them to perform a certain computation and return a value that is of use to the C# program. We represent a call to a web service by a call node and a return node. We encode the web service name and the called method inside the node. This allows us to detect the classes that call the web service. The notation is shown in Figure 2.

Delegates: delegates are similar to function pointers; a delegate refers to a method, and the method it is bound to depends on the type of the object that instantiates it. Delegates can be polymorphic, which allows a delegate to be bound to many methods. The notation is shown in Figure 3.
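A minimal sketch (our own example) of a delegate bound to different methods at run time; each possible binding corresponds to a possible target of the delegate call edge in the CIG.

using System;

public class DelegateExample
{
    // A delegate type that can be bound to any method taking a decimal and
    // returning a decimal.
    public delegate decimal PriceRule(decimal basePrice);

    public static decimal Retail(decimal p) => p * 1.2m;
    public static decimal Wholesale(decimal p) => p * 1.05m;

    public static void Main()
    {
        // The delegate is bound to different methods at run time, so the CIG
        // must consider every method the delegate may be bound to.
        PriceRule rule = Retail;
        Console.WriteLine(rule(100m));   // calls Retail

        rule = Wholesale;
        Console.WriteLine(rule(100m));   // calls Wholesale
    }
}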

Pointers are represented in Figure 4.

COM Components
COM+ components are not changed by the application; they are treated as black boxes. They are registered class libraries found on a server that can be called from the C# program; the component might also be on the internet. We encode the name of the component, the class, and the method called. The notation is shown in Figure 5.

Call to Read from XML File
There might be interaction between C# programs and XML files. A call to read from an XML file is treated the same as a call to an internal method. We encode the name of the XML file and its path inside the node. This allows us to detect the classes that read the XML file. The notation is shown in Figure 6.
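A minimal sketch of such a call, using System.Xml; the file path and XPath expression are hypothetical placeholders, and the file name and path are the information encoded in the corresponding CIG node.

using System.Xml;

public class ConfigReader
{
    // Reads a value from an XML file; in the CIG this call is represented by a
    // call/return node pair that encodes the file name and path.
    public static string ReadSetting(string xmlPath, string xpath)
    {
        var doc = new XmlDocument();
        doc.Load(xmlPath);                        // e.g., "Config\\settings.xml" (hypothetical)
        XmlNode node = doc.SelectSingleNode(xpath);
        return node?.InnerText;
    }
}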

Call to Execute a Stored Procedure
C# programs interact with stored procedures by sending the name of the stored procedure to the SQL server in order to execute it. These procedures run certain queries on the database and return results to the program. We represent a call to a stored procedure by a call node and a return node, as for any call to an internal method. The node contains information about the name and location of the stored procedure. The notation is shown in Figure 7.
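A minimal sketch of executing a stored procedure from C# using ADO.NET; the connection string and the procedure name (usp_GetOrderCount, assumed to return an integer count) are hypothetical placeholders of ours.

using System.Data;
using System.Data.SqlClient;

public class OrderRepository
{
    // Executes a stored procedure by name; in the CIG the call is represented by
    // a call/return node pair encoding the procedure name and its location.
    public static int GetOrderCount(string connectionString, int customerId)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("usp_GetOrderCount", connection)) // hypothetical procedure
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@CustomerId", customerId);
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}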

Events
Events in C# are handled like calls to methods. Each event is handled by an event handler; an event is raised by supplying the address of the handler, that is, a method that will receive the event. A call to a handler is similar to a call to any internal method in the program. We encode the name of the called method inside the node, and each call to a handler is represented by a call node and a return node. The notation is shown in Figure 8.

Events with Multiple Handlers
An event might be handled by more than one handler, in which case all methods handling that event are executed. The node of the raised event is associated with all of its handlers: if an event is associated in code with handlers H1, H2, and H3, then when the event is raised, the handlers are executed in the order H1, H2, H3. The notation is shown in Figure 9.
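A minimal sketch (our own example) of an event with several handlers; when the event is raised, the handlers run in the order in which they were attached, which is the order reflected in the CIG.

using System;

public class Button
{
    public event EventHandler Clicked;

    public void Click()
    {
        // Raising the event calls every attached handler, in attachment order.
        Clicked?.Invoke(this, EventArgs.Empty);
    }
}

public class EventExample
{
    static void H1(object s, EventArgs e) => Console.WriteLine("H1 handled the click");
    static void H2(object s, EventArgs e) => Console.WriteLine("H2 handled the click");
    static void H3(object s, EventArgs e) => Console.WriteLine("H3 handled the click");

    public static void Main()
    {
        var button = new Button();
        button.Clicked += H1;
        button.Clicked += H2;
        button.Clicked += H3;
        button.Click();          // H1, H2, and H3 execute in that order
    }
}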

Exception Handling
When a C# application encounters a run-time error, it raises an exception and, by default, the application terminates. C# allows structured exception handling, which is a way for an application to recover from an unexpected error, or at least to have a chance to save data before closing. C#.Net contains built-in exception handlers that provide information about the exception, and the programmer can create catch blocks to catch the raised exception. In the CIG, exception-handling statements are treated like any other statements in the program.
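A minimal sketch of structured exception handling (our own example); the try, catch, and finally blocks are treated as ordinary statements in the CIG.

using System;
using System.IO;

public class ExceptionExample
{
    public static string ReadFirstLine(string path)
    {
        try
        {
            using (var reader = new StreamReader(path))
            {
                return reader.ReadLine();
            }
        }
        catch (FileNotFoundException ex)
        {
            // The catch block gives the application a chance to recover
            // (or to save data) instead of terminating.
            Console.WriteLine($"File not found: {ex.FileName}");
            return null;
        }
        finally
        {
            Console.WriteLine("Read attempt finished.");
        }
    }
}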

5.1.3. Assumptions for the CIG Graph

(i) A statement-level change is a modification: the modification or deletion of an executable statement in a method is treated as a change to that method. Such changes are easily handled by selecting the test cases that traverse the changed statements.

(ii) Modification of a variable or a constant in C# can be handled easily because constants are placed in an enumeration, and each method using this enumeration is considered affected.

(iii) A globally qualified name is used for each class. The globally qualified name lists the whole hierarchy of the class, and this hierarchy information is written at each point of instantiation (see the sketch after this list).

(iv) Each call to an external method (class library, COM+ component, stored procedure, XML file, web service) is represented as a pair of call and return nodes connected by a path edge.

(v) Each call to an internal method is represented as a pair of call and return nodes.
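A minimal sketch of assumption (iii): the instantiation point spells out the full hierarchy of the class using a globally qualified name. The namespace and class names are hypothetical.

namespace Billing.Invoices
{
    public class Invoice { }
}

public class Example
{
    public static void Main()
    {
        // Globally qualified name: the full namespace hierarchy is written at the
        // point of instantiation, so the CIG node records exactly which class is used.
        var invoice = new global::Billing.Invoices.Invoice();
        System.Console.WriteLine(invoice.GetType().FullName);  // Billing.Invoices.Invoice
    }
}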

5.2. Test Case Selection

Algorithm 3 presents the algorithm for finding the affected or potentially affected edges by comparing the initial CIG graph with the modified one. The algorithm Compare is invoked at the entry node of each method that belongs to the classes of the ACD. It accepts a node of the CIG as input and finds the affected or potentially affected edges, that is, edges that reach modified code (a C# sketch of this comparison is given after the two algorithm listings). Algorithm 4 presents the algorithm for determining the set of test cases T2 that cover the affected or potentially affected edges.

Algorithm: Compare
Input: N: the entry node of the CIG of the original program
    N': the corresponding entry node of the CIG of the modified program
Output: ED, the set of affected or potentially affected edges
Begin
 Mark N as visited
 For each edge e leaving N do
   If there is an edge e' leaving N' with the same label as e then
     If the target node of e differs from the target node of e' then
       ED = ED ∪ {e}
     Else
       If the target node of e is not marked as visited then
         Compare(target of e, target of e')
       End if
     End if
   End if
 End for
 For each edge e leaving N with no corresponding edge leaving N' do
   ED = ED ∪ {e}
 End for
End Compare

Input: the set ED of affected or potentially affected edges traversed in the CIG graph;
T1: the set of test cases that cover the classes in the ACD.
Output: T2: the set of test cases selected from T1.
Description: this algorithm is responsible for finding the test cases from T1 that traverse
the affected or potentially affected edges. Selection is based on the test cases selected from
the affected class diagram.
Begin
 T2 = { }
 For each edge e in ED do
   For each test case t in T1 do
     If t covers e then   // the test case traverses the affected edge
       T2 = T2 ∪ {t}
     End if
   End for
 End for
 Return T2
End
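The sketch below is one possible C# rendering of the edge comparison of Algorithm 3, under simplified node and edge types of our own (not the actual data model of a tool); it collects an edge as affected when the corresponding edge is missing in the modified graph or the two target nodes differ, and otherwise recurses on the unvisited targets.

using System.Collections.Generic;
using System.Linq;

// Simplified CIG node and edge types (illustrative only).
public class CigNode
{
    public string Label;                       // e.g., the statement or call it represents
    public List<CigEdge> Edges = new List<CigEdge>();
    public bool Visited;
}

public class CigEdge
{
    public string Label;                       // e.g., the receiver type bound to the call
    public CigNode Target;
}

public static class CigComparer
{
    // Collects affected or potentially affected edges of the original CIG by
    // traversing it in step with the CIG of the modified program.
    public static void Compare(CigNode original, CigNode modified, List<CigEdge> affected)
    {
        original.Visited = true;
        foreach (CigEdge e in original.Edges)
        {
            CigEdge e2 = modified.Edges.FirstOrDefault(x => x.Label == e.Label);
            if (e2 == null || e.Target.Label != e2.Target.Label)
            {
                affected.Add(e);               // edge was deleted or its target differs
            }
            else if (!e.Target.Visited)
            {
                Compare(e.Target, e2.Target, affected);
            }
        }
    }
}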

5.3. Step 5: Test Case Reduction

The first phase of the regression test selection technique selects the test cases T1 that cover modified methods: T1 is determined by searching for the deleted/modified and affected methods and selecting all the test cases that cover them. In the second phase, T2 is determined by selecting all the test cases that cover the affected or potentially affected edges. Instead of determining T1 (or T2) by selecting all test cases traversing an affected method (or an affected/potentially affected edge), we can reduce the size of T1 or T2 by randomly choosing one of these test cases per affected method or edge. We refer to the resulting set of tests as Tr, which is a subset of T1 or of T2. This reduction may be justified when T1 or T2 is large, for the purpose of reducing regression testing time.
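A minimal sketch of this reduction step, assuming a mapping from each affected element (an affected method in Phase 1, or an affected or potentially affected edge in Phase 2) to the test cases that cover it; one covering test is chosen at random per element. The type and parameter names are illustrative.

using System;
using System.Collections.Generic;

public static class TestReducer
{
    // For each affected element, keep one randomly chosen covering test case.
    public static HashSet<string> Reduce(Dictionary<string, List<string>> coveringTests,
                                         Random random)
    {
        var reduced = new HashSet<string>();
        foreach (var element in coveringTests)
        {
            List<string> tests = element.Value;
            if (tests.Count > 0)
            {
                reduced.Add(tests[random.Next(tests.Count)]);
            }
        }
        return reduced;
    }
}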

6. Empirical Results

6.1. Empirical Procedure

The regression test selection technique is applied to three applications: an Archive program, a Purchase Order application for selling inventory, and a Task Management application that handles the internal tasks of a software company. For each application, we use different versions, leading to a total of six subject programs. Table 2 presents the characteristics of these subject programs. Initial information and data structures are generated for each version, including the original test suite T, a test case-method coverage table, and a trace file. Each program undergoes a logical change. Then, the three-phase technique is applied to each subject program. We assess the regression test selection technique according to the following three criteria.
(i) The number of test cases selected by the technique, denoted by |T1| or |T2|.
(ii) Inclusiveness determines the degree to which the technique selects modification-revealing test cases [18]. Modification-revealing tests are those that produce different output on the modified program than on the original program; mr is the number of modification-revealing tests. When a regression test selection method selects smr of these tests, inclusiveness is given by (smr/mr) × 100.
(iii) Precision determines how well the technique avoids choosing test cases that would not produce different output than the original program [18]. The number of non-modification-revealing tests is denoted by nmr, and the number of non-modification-revealing tests that are omitted is denoted by onmr. Precision is given by (onmr/nmr) × 100.

6.2. Results and Discussion

Table 3 presents a summary of the results for the six subject programs: |T| is the total number of test cases in the initial suite; |T1| is the number (and percentage) of selected test cases after the first phase; |T2| is the number (and percentage) of selected test cases after the second phase. Tables 4 and 5 illustrate the precision and inclusiveness for the six examples after applying the first and second phases, respectively. Table 6 presents |Tr|, the number of tests selected after applying the further reduction step (Step 5).

The number of selected test cases is relatively small (Table 3), ranging from 4.3% to 45% for T1 (after applying the first phase) and from 1.5% to 35% for T2 (after applying the second phase). This shows that our technique does lead to a reduction in the number of test cases to be rerun after program modification.

Tables 4 and 5 depict the precision and inclusiveness outcomes for the first two phases of the technique. The inclusiveness of T1 is 100%, which implies that all modification-revealing tests are selected. The precision of T1 is fairly high and ranges between 71% and 96%, which implies that only a limited number of non-modification-revealing tests are selected. Table 5 shows that the inclusiveness of T2 is still 100%; that is, Phase 2 of our technique is also safe in selecting all modification-revealing tests. The precision of T2 is close to 100%, which means that hardly any non-modification-revealing tests are selected. We also note that the precision after Phase 2 (of T2) is higher than that after Phase 1 (of T1) for the same inclusiveness level.

Table 6 presents the results of the further reduction step (Step 5 in Phase 3), which show that the inclusiveness of Tr drops (to lower than 26%) in comparison with 100% for T1 and T2. This drop occurs due to the omission of redundant and multiple tests that cover the same elements. Hence, Step 5 of our technique does not provide safe coverage. However, the precision rises to 100%, and all non-modification-revealing tests are omitted. We propose using this step, as well as the prioritization step of Phase 3, only when T1 or T2 is large and the time available for regression testing is very limited; this applies in emergency situations where more regression testing time is too expensive vis-à-vis the suspended services of a system waiting for the completion of regression testing. Also, the set of test cases T1 chosen in the first phase covers the affected or potentially affected edges in the ACD; thus, building the CIG graph in the second phase results in a smaller set T2 to be rerun. Testers might not have time to carry out both phases; they can stop at the Affected Class Diagram level and run all the test cases in T1. Proceeding to the second phase pays off only under the following condition: B(CIG) + S(T2) + R(T2) < R(T1), where B(CIG) is the time to build the CIG graph, S(T2) is the time to select the subset of tests T2, and R(T1) and R(T2) are the times to run the respective subsets of tests.

Our approach has a limitation when it is applied to large programs. The complexity of the algorithms is quadratic in the number of CIG nodes. For large CIGs, our technique can be costly. Hence, we need to explore ways to reduce the cost and still maintain effective regression testing.

7. Further Work—Prioritizing Test Cases

Test case prioritization entails running the test cases with the highest priority earlier. Prioritization techniques can provide earlier detection of faults in the modified program. Test cases can be prioritized according to different metrics; for example, they can be prioritized according to the number of covered methods or statements. However, prioritizing test cases according to the number of covered statements might end up running first the test cases that cover a large number of statements but only a small amount of modified code. A similar argument applies to methods.

Since our test selection technique is based on safe coverage of all affected or potentially affected edges, it might lead to selecting a large number of test cases. Hence, prioritizing the selected test cases may be useful for further reducing the test cases that are run. We propose a new prioritization technique that employs the number of modified methods. The weight of each test case reflects the extent to which the test case covers modified or affected methods.

We treat class library changes differently from non-class-library changes. Since library classes are reusable components used by many applications, it is important to assign the highest priority to the test cases that cover library class changes. Therefore, for a test case t that covers a changed library class, we assign the maximum weight.

For nonlibrary classes, we propose the following expression for the priority weight of a test case t: W(t) = (m × c)/M, where m is the number of modified/deleted and affected methods covered by t, c is the number of classes containing these methods, and M is the total number of methods in the program. The priority weight is proportional to both the diversity of the affected methods and of the classes, normalized with respect to the total number of methods. That is, a selected test case is more important if it covers more affected methods distributed among more classes. Test cases are ordered according to the weight W(t); this helps in running the more important test cases first.
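A minimal sketch of this prioritization, assuming (as described above) that the weight of a nonlibrary test case is the product of the number of covered affected methods and the number of classes containing them, normalized by the total number of methods, and that test cases covering changed library classes are pinned to the maximum weight. The type and member names are our own.

using System.Collections.Generic;
using System.Linq;

public class TestCaseInfo
{
    public string Name;
    public int AffectedMethodsCovered;    // m: modified/deleted and affected methods covered
    public int ClassesCovered;            // c: classes containing those methods
    public bool CoversChangedLibraryClass;
}

public static class Prioritizer
{
    // Orders the selected test cases by descending weight so that the most
    // important tests run first.
    public static List<TestCaseInfo> Prioritize(IEnumerable<TestCaseInfo> tests,
                                                int totalMethodsInProgram)
    {
        double Weight(TestCaseInfo t) =>
            t.CoversChangedLibraryClass
                ? double.MaxValue   // library-class changes get the highest priority
                : (double)(t.AffectedMethodsCovered * t.ClassesCovered) / totalMethodsInProgram;

        return tests.OrderByDescending(Weight).ToList();
    }
}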

8. Conclusion

We have presented a regression test selection technique that is based on identifying changes in programs. The technique covers C# specific features and includes relevant .Net features. It selects suitable subsets of test cases that are modification revealing; these subsets show a reduction in the number of tests selected while maintaining adequate levels of inclusiveness and precision, as demonstrated by the empirical results. Despite the limited number and size of the subject programs, the results provide confidence in the effectiveness of our technique. Moreover, our technique provides the tester with the flexibility to use the phases and steps that are compatible with the required level of inclusiveness and precision and with the allowed regression testing time.