Advances in Software Engineering: The latest articles from Hindawi Publishing Corporation. © 2015 Hindawi Publishing Corporation. All rights reserved. Supporting Technical Debt Cataloging with TD-Tracker Tool Thu, 17 Sep 2015 06:55:43 +0000 Technical debt (TD) is an emerging area that has attracted academic attention. Managers must have information about debt in order to balance time-to-market advantages against the issues of TD. In addition, managers must have information about TD to plan payments. Development tasks such as designing, coding, and testing generate different sorts of TD, each one with specific information. Moreover, a literature review pointed out a gap in identifying and accurately cataloging technical debt. Tools that can identify technical debt are available, but no described solution supports cataloging all types of debt. This paper presents an approach to creating an integrated catalog of technical debt from different software development tasks. The approach allows tabulating and managing TD properties in order to support managers in the decision process. It also allows managers to track TD. The approach is implemented in the TD-Tracker tool, which can integrate different TD identification tools and import the identified debts. We present integrations between TD-Tracker and two external tools used to identify potential technical debts. As part of the approach, we describe how to map the relationship between TD-Tracker and the external tools. We also show how to manage external information within TD-Tracker. Lucas Borante Foganholi, Rogério Eduardo Garcia, Danilo Medeiros Eler, Ronaldo Celso Messias Correia, and Celso Olivete Junior Copyright © 2015 Lucas Borante Foganholi et al. All rights reserved. Breaking the Web Barriers of the e-Administration Using an Accessible Digital Certificate Based on a Cryptographic Token Mon, 14 Sep 2015 14:11:18 +0000 The purpose of developing e-Government is to make public administrations more efficient and transparent and to allow citizens to access information more comfortably and effectively. Such benefits are even more important to people with a physical disability, allowing them to reduce waiting times in procedures and travel. However, e-Government is not in widespread use among this group, as its members not only harbor the same fears as other citizens but also must cope with the barriers inherent to their disability. This research proposes a solution to help persons with disabilities access e-Government services. This work, in cooperation with the Spanish Federation of Spinal-Cord Injury Victims and the Severely Disabled, includes the development of a portal specially oriented towards people with disabilities to help them locate and access services offered by Spanish administrations. Use of the portal relies on digital authentication of users based on the X.509 certificates found in the identity cards of Spanish citizens. However, an analysis of their use reveals that this feature constitutes a significant barrier to accessibility. This paper proposes a more accessible solution using a USB cryptographic token that conceals from users all of the complexity entailed in accessing certificate-based applications, while assuring the required security. Boni García, Ana Gómez, Rafael Conde, Yolanda Hernández, and Miguel Ángel Valero Copyright © 2015 Boni García et al. All rights reserved.
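The accessibility work above relies on client authentication with X.509 certificates held on a USB cryptographic token. As a rough illustration of the kind of plumbing such a portal must hide from its users, the following Java sketch reads a certificate from a PKCS#11 token through the standard SunPKCS11 provider; the configuration file path and PIN are hypothetical placeholders, and the snippet is not taken from the paper.

import java.security.KeyStore;
import java.security.Provider;
import java.security.Security;
import java.security.cert.X509Certificate;
import java.util.Enumeration;

// Minimal sketch: reading an X.509 certificate from a USB token via PKCS#11.
// The configuration path and PIN below are placeholders, not values from the paper.
public class TokenCertificateSketch {
    public static void main(String[] args) throws Exception {
        // Configure the SunPKCS11 provider with the token's native library (Java 9+).
        Provider pkcs11 = Security.getProvider("SunPKCS11")
                                  .configure("/etc/pkcs11/token.cfg"); // hypothetical path
        Security.addProvider(pkcs11);

        // The token itself acts as the keystore; the PIN unlocks it.
        KeyStore keyStore = KeyStore.getInstance("PKCS11", pkcs11);
        keyStore.load(null, "1234".toCharArray()); // hypothetical PIN

        // List the certificates stored on the token.
        Enumeration<String> aliases = keyStore.aliases();
        while (aliases.hasMoreElements()) {
            String alias = aliases.nextElement();
            X509Certificate cert = (X509Certificate) keyStore.getCertificate(alias);
            if (cert != null) {
                System.out.println(alias + ": " + cert.getSubjectX500Principal());
            }
        }
    }
}

The point of the token-based approach described in the abstract is that users never see steps like these; the complexity is concealed behind the accessible portal.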
LTTng CLUST: A System-Wide Unified CPU and GPU Tracing Tool for OpenCL Applications Wed, 19 Aug 2015 06:29:59 +0000 As computation schemes evolve and many new tools become available to programmers to enhance the performance of their applications, many programmers have started to look towards highly parallel platforms such as the Graphics Processing Unit (GPU). Offloading computations that can take advantage of the architecture of the GPU is a technique that has proven fruitful in recent years. This technology enhances the speed and responsiveness of applications. Also, as a side effect, it reduces the power requirements for those applications and therefore extends portable devices’ battery life and helps computing clusters run more power efficiently. Many performance analysis tools, such as LTTng, strace, and SystemTap, already allow Central Processing Unit (CPU) tracing and help programmers use CPU resources more efficiently. On the GPU side, different tools such as Nvidia’s Nsight, AMD’s CodeXL, and the third-party TAU and VampirTrace allow tracing Application Programming Interface (API) calls and OpenCL kernel execution. These tools are useful but are completely separate, and none of them allows a unified CPU-GPU tracing experience. We propose an extension to the existing scalable and highly efficient LTTng tracing platform to allow unified tracing of the GPU along with the CPU’s full tracing capabilities. David Couturier and Michel R. Dagenais Copyright © 2015 David Couturier and Michel R. Dagenais. All rights reserved. On Using Fuzzy Linguistic 2-Tuples for the Evaluation of Human Resource Suitability in Software Development Tasks Wed, 24 Jun 2015 09:14:46 +0000 Efficient allocation of human resources to the development tasks comprising a software project is a key challenge in software project management. To address this critical issue, a systematic human resource evaluation and selection approach can prove helpful. In this paper, a fuzzy linguistic approach is introduced to evaluate the suitability of candidate human resources (software developers) considering their technical skills (i.e., provided skills) and the technical skills required to perform a software development task (i.e., task-related skills). The proposed approach is based on qualitative evaluations which are derived in the form of fuzzy linguistic 2-tuples from a group of decision makers (project managers). The approach applies a group/similarity degree-based aggregation technique to obtain an objective aggregation of the ratings of task-related skills and provided skills. To further analyse the suitability of each candidate developer, possible skill relationships are considered, which reflect the contribution of provided skills to the capability of learning other skills. The applicability of the approach is demonstrated and discussed through an exemplar case study scenario. Vassilis C. Gerogiannis, Elli Rapti, Anthony Karageorgos, and Panos Fitsilis Copyright © 2015 Vassilis C. Gerogiannis et al. All rights reserved. Mutation Analysis Approach to Develop Reliable Object-Oriented Software Thu, 25 Dec 2014 07:16:00 +0000 In general, modern programs are large and complex, and it is essential that they be highly reliable in their applications. In order to develop highly reliable software, the Java programming language provides developers with a rich set of exceptions and exception handling mechanisms. Exception handling mechanisms are intended to help developers build robust programs.
Given a program with exception handling constructs, effective testing requires detecting whether or not all possible exceptions are raised and caught. However, complex exception handling constructs make it tedious to trace which exceptions are handled and where, and which exceptions are passed on. In this paper, we address this problem and propose a mutation analysis approach to develop reliable object-oriented programs. We have applied a number of mutation operators to create a large set of mutant programs with different types of faults. We then generate test cases and test data to uncover exception-related faults. The test suite so obtained is applied to the mutant programs, measuring the mutation score and hence verifying whether the mutant programs are effective or not. We have tested our approach with a number of case studies to substantiate the efficacy of the proposed mutation analysis technique. Monalisa Sarma Copyright © 2014 Monalisa Sarma. All rights reserved. SPOT: A DSL for Extending Fortran Programs with Metaprogramming Wed, 17 Dec 2014 07:57:53 +0000 Metaprogramming has shown much promise for improving the quality of software by offering programming language techniques to address issues of modularity, reusability, maintainability, and extensibility. Thus far, the power of metaprogramming has not been explored deeply in the area of high performance computing (HPC). There is a vast body of legacy code written in Fortran running throughout the HPC community. In order to facilitate software maintenance and evolution in HPC systems, we introduce a DSL that can be used to perform source-to-source translation of Fortran programs by providing a higher level of abstraction for specifying program transformations. The underlying transformations are actually carried out through a metaobject protocol (MOP), and a code generator is responsible for translating a SPOT program into the corresponding MOP code. The design focus of the framework is to automate program transformations through techniques of code generation, so that developers only need to specify the desired transformations while remaining oblivious to the details of how the transformations are performed. The paper provides a general motivation for the approach and explains its design and implementation. In addition, this paper presents case studies that illustrate the potential of our approach to improve code modularity, maintainability, and productivity. Songqing Yue and Jeff Gray Copyright © 2014 Songqing Yue and Jeff Gray. All rights reserved. A Cost Effective and Preventive Approach to Avoid Integration Faults Caused by Mistakes in Distribution of Software Components Mon, 15 Dec 2014 10:51:43 +0000 In distributed software and hardware environments, ensuring the proper operation of software is a challenge. The complexity of distributed systems increases the number of integration faults resulting from configuration errors in the distribution of components. Therefore, the main contribution of this work is a change in perspective, in which integration faults (in the context of mistakes in the distribution of components) are prevented rather than dealt with through postmortem approaches. To this end, this paper proposes a preventive, low-cost, minimally intrusive, and reusable approach. The generation of hash tags and a built-in version control for the application are at the core of the solution.
Prior to deployment, the tag generator creates a list of components along with their respective tags, which becomes part of the deployment package. During production and execution of the application, the version control compares, on demand, the tag list with the components of the user station and shows the discrepancies. The approach was applied to a complex application of a large company and was proven successful, avoiding integration faults and reducing the diagnosis time by 65%. Luciano Lucindo Chaves Copyright © 2014 Luciano Lucindo Chaves. All rights reserved. MetricsCloud: Scaling-Up Metrics Dissemination in Large Organizations Wed, 10 Dec 2014 06:31:46 +0000 The evolving software development practices in modern software development companies often bring more empowerment to software development teams. The empowered teams change the way in which software products and projects are measured and how the measures are communicated. In this paper we address the problem of dissemination of measurement information by designing a measurement infrastructure in a cloud environment. The described cloud system (MetricsCloud) utilizes file sharing as the underlying mechanism to disseminate the measurement information throughout the company. Based on the types of measurement systems identified in this paper, MetricsCloud realizes a set of synchronization strategies to fulfill the needs specific to these types. The results from migrating traditional, web-based, folder-sharing distribution of measurement systems to the cloud show that this type of measurement dissemination is flexible, reliable, and easy to use. Miroslaw Staron and Wilhelm Meding Copyright © 2014 Miroslaw Staron and Wilhelm Meding. All rights reserved. The Impact of the PSP on Software Quality: Eliminating the Learning Effect Threat through a Controlled Experiment Tue, 30 Sep 2014 10:54:35 +0000 Data from the Personal Software Process (PSP) courses indicate that the PSP improves the quality of the developed programs. However, since the programs (exercises of the course) are in the same application domain, the improvement could be due to programming repetition. In this research we try to eliminate this threat to validity in order to confirm that the quality improvement is due to the PSP. In a previous study we designed and performed a controlled experiment with software engineering undergraduate students at the Universidad de la República. The students performed the same exercises of the PSP course but without applying the PSP techniques. Here we present a replication of this experiment. The results indicate that the PSP, and not programming repetition, is the most plausible cause of the important software quality improvements. Fernanda Grazioli, Diego Vallespir, Leticia Pérez, and Silvana Moreno Copyright © 2014 Fernanda Grazioli et al. All rights reserved. An Improved Approach for Reduction of Defect Density Using Optimal Module Sizes Sun, 24 Aug 2014 05:46:54 +0000 Nowadays, software developers face challenges in minimizing the number of defects during software development. Using the defect density parameter, developers can identify possibilities for improvement in the product. Since the total number of defects depends on module size, there is a need to calculate the optimal module size that minimizes defect density. In this paper, an improved model has been formulated that captures the relationship between defect density and variable module sizes.
This relationship can be used to optimize overall defect density through an effective distribution of module sizes. Three available data sets related to the aspect of concern have been examined with the proposed model, taking distinct values of the variables and parameters and placing constraints on the parameters. A curve-fitting method has been used to obtain the module size with minimum defect density. Goodness-of-fit measures have been computed to validate the proposed model on the data sets. The defect density can be optimized by an effective distribution of module sizes: larger modules can be broken into smaller modules, and smaller modules can be merged, to minimize the overall defect density. Dinesh Verma and Shishir Kumar Copyright © 2014 Dinesh Verma and Shishir Kumar. All rights reserved. Model-Driven Development of Automation and Control Applications: Modeling and Simulation of Control Sequences Thu, 07 Aug 2014 09:07:55 +0000 The scope and responsibilities of control applications are increasing due to, for example, the emergence of the industrial internet. To meet the challenge, model-driven development techniques have been actively researched in this application domain. Simulations, which have traditionally been used in the domain, have not yet been sufficiently integrated into model-driven control application development. In this paper, a model-driven development process that includes support for design-time simulations is complemented with support for simulating sequential control functions. The approach is implemented with open source tools and demonstrated by creating and simulating a control system model in closed loop with a large and complex model of a paper industry process. Timo Vepsäläinen and Seppo Kuikka Copyright © 2014 Timo Vepsäläinen and Seppo Kuikka. All rights reserved. Prediction Model for Object Oriented Software Development Effort Estimation Using One Hidden Layer Feed Forward Neural Network with Genetic Algorithm Tue, 03 Jun 2014 11:15:29 +0000 The budget computation for software development is affected by the prediction of software development effort and schedule. Software development effort and schedule can be predicted precisely on the basis of past software project data sets. In this paper, a model for object-oriented software development effort estimation using a one hidden layer feed forward neural network (OHFNN) has been developed. The model has been further optimized with the help of a genetic algorithm, taking the weight vector obtained from the OHFNN as the initial population for the genetic algorithm. Convergence has been obtained by minimizing the sum of squared errors of each input vector, and the optimal weight vector has been determined to predict the software development effort. The model has been empirically validated on the PROMISE software engineering repository dataset. The performance of the model is more accurate than that of the well-established constructive cost model (COCOMO). Chandra Shekhar Yadav and Raghuraj Singh Copyright © 2014 Chandra Shekhar Yadav and Raghuraj Singh. All rights reserved. Recovering Software Design from Interviews Using the NFR Approach: An Experience Report Thu, 17 Apr 2014 11:37:23 +0000 In the US Air Force there are several systems for which design documentation does not exist. Chief reasons for this lack of system documentation include software having been developed several decades ago, the natural evolution of software, and software existing mostly in its binary versions.
However, the systems are still being used, and the US Air Force would like to know the actual designs of the systems so that they may be reengineered for future requirements. Any knowledge of such systems lies mostly with their users and managers. A project was commissioned to recover the designs of such systems based on knowledge of the systems obtained by interviewing their stakeholders. In this paper we describe our application of the NFR Approach, where NFR stands for Nonfunctional Requirements, to recover the software design of a middleware system used by the Air Force called the Phoenix system. In our project we interviewed stakeholders of the Phoenix system, applied the NFR Approach to recover design artifacts, and validated the artifacts with the design engineers of the Phoenix system. Our study indicated that there was a high correlation between the recovered design and the actual design of the Phoenix system. Nary Subramanian, Steven Drager, and William McKeever Copyright © 2014 Nary Subramanian et al. All rights reserved. Tuning of Cost Drivers by Significance Occurrences and Their Calibration with Novel Software Effort Estimation Method Tue, 31 Dec 2013 13:45:57 +0000 Estimation is an important part of software engineering projects, and the ability to produce accurate effort estimates has an impact on key economic processes, including budgeting, bid proposals, and deciding the execution boundaries of the project. The work in this paper explores the interrelationship among different dimensions of software projects, namely, project size, effort, and effort-influencing factors. The study aims at providing a better effort estimate based on the parameters of a modified COCOMO, along with the detailed use of a binary genetic algorithm as a novel optimization algorithm. The significance of the 15 cost drivers is shown by their impact on the MMRE of effort on the original NASA 63 dataset. The proposed method produces tuned values of the cost drivers that are effective enough to improve the productivity of the projects. Prediction at different levels of MRE for each project reflects the percentage of projects with the desired accuracy. Furthermore, the model is validated on two different datasets and shows better estimation accuracy compared to the COCOMO 81 based NASA 63 and NASA 93 datasets. Brajesh Kumar Singh, Shailesh Tiwari, K. K. Mishra, and A. K. Misra Copyright © 2013 Brajesh Kumar Singh et al. All rights reserved. Thematic Review and Analysis of Grounded Theory Application in Software Engineering Tue, 22 Oct 2013 11:33:15 +0000 We present metacodes, a new concept to guide grounded theory (GT) research in software engineering. Metacodes are high-level codes that can help software engineering researchers guide the data coding process. Metacodes are constructed in the course of analyzing software engineering papers that use grounded theory as a research methodology. We performed a high-level analysis to discover common themes in such papers and discovered that GT had been applied primarily in three software engineering disciplines: agile development processes, geographically distributed software development, and requirements engineering. For each category, we collected and analyzed all grounded theory codes and created, following a GT analysis process, what we call metacodes that can be used to drive further theory building. This paper surveys the use of grounded theory in software engineering and presents an overview of the successes and challenges of applying this research methodology.
Omar Badreddin Copyright © 2013 Omar Badreddin. All rights reserved. A Granular Hierarchical Multiview Metrics Suite for Statecharts Quality Sun, 22 Sep 2013 10:41:48 +0000 This paper presents a bottom-up approach for a multiview measurement of statechart size, topological properties, and internal structural complexity for understandability prediction and assurance purposes. It tackles the problem at different conceptual depths or, equivalently, at several abstraction levels. The main idea is to study and evaluate a statechart at different levels of granulation corresponding to different conceptual depth levels or levels of detail. The highest level corresponds to a flat process view diagram (depth = 0); the adequate upper depth limit is determined by the modelers according to the inherent complexity of the problem under study and the level of detail required for the situation at hand (it corresponds to the all-states view). For the purposes of measurement, we proceed using a bottom-up strategy, starting with the all-states view diagram, identifying and measuring the constituent parts of its deepest composite states, and then gradually collapsing them to obtain the next intermediate view (decrementing the depth) while aggregating measures incrementally, until reaching the flat process view diagram. To this end, we first identify, define, and derive a relevant metrics suite useful for predicting the level of understandability and other quality aspects of a statechart, and then we propose a fuzzy rule-based system prototype for understandability prediction, assurance, and validation purposes. Mokhtar Beldjehem Copyright © 2013 Mokhtar Beldjehem. All rights reserved. A New Software Development Methodology for Clinical Trial Systems Thu, 21 Mar 2013 17:38:01 +0000 Clinical trials are crucial to modern healthcare industries, and information technologies have been employed to improve the quality of data collected in trials and to reduce the overall cost of data processing. While developing software for clinical trials, one needs to take into account the similar patterns shared by all clinical trial software. Such patterns exist because of the unique properties of clinical trials and the rigorous regulations imposed by the government for reasons of subject safety. Among the existing software development methodologies, none, unfortunately, was built specifically upon these properties and patterns, and therefore none works sufficiently well. In this paper, the process of clinical trials is reviewed, and the unique properties of clinical trial system development are explained thoroughly. Based on these properties, a new software development methodology is then proposed specifically for developing electronic clinical trial systems. A case study shows that, by adopting the proposed methodology, high-quality software products can be delivered on schedule and within budget. With such high-quality software, data collection, management, and analysis can be more efficient, accurate, and inexpensive, which in turn will improve the overall quality of clinical trials. Li-Min Liu Copyright © 2013 Li-Min Liu. All rights reserved. Gesture Recognition Using Neural Networks Based on HW/SW Cosimulation Platform Sun, 24 Feb 2013 11:25:10 +0000 Hardware/software (HW/SW) cosimulation integrates software simulation and hardware simulation simultaneously. Usually, an HW/SW cosimulation platform is used to ease debugging and verification in very large-scale integration (VLSI) design.
To accelerate the computation of the gesture recognition technique, an HW/SW implementation using field programmable gate array (FPGA) technology is presented in this paper. The major contributions of this work are (1) a novel design of a memory controller in the Verilog Hardware Description Language (Verilog HDL) to reduce memory consumption and the load on the processor; (2) hardwiring the testing part of the neural network algorithm to improve speed and performance; and (3) a design whose major benefit is that it takes only a few milliseconds to recognize a hand gesture, which makes it computationally more efficient. American Sign Language gesture recognition is chosen to verify the performance of the approach, and several experiments were carried out on four databases of the gestures (alphabet signs A to Z). Priyanka Mekala, Jeffrey Fan, Wen-Cheng Lai, and Ching-Wen Hsue Copyright © 2013 Priyanka Mekala et al. All rights reserved. Accountability in Enterprise Mashup Services Mon, 21 Jan 2013 09:38:55 +0000 As a result of the proliferation of Web 2.0 style web sites, the practice of mashup services has become increasingly popular in the web development community. While mashup services bring flexibility and speed in delivering new valuable services to consumers, the issue of accountability associated with the mashup practice remains largely ignored by the industry. Furthermore, realizing the great benefits of mashup services, industry leaders are eagerly pushing these solutions into the enterprise arena. Although enterprise mashup services hold great promise in delivering a flexible SOA solution in a business context, the lack of accountability in current mashup solutions may render them ineffective in the enterprise environment. This paper defines accountability for mashup services, analyses the underlying issues in practice, and finally proposes a framework and ontology to model accountability. This model may then be used to develop effective accountability solutions for mashup environments. Compared to the traditional method of using QoS or SLA monitoring to address accountability requirements, our approach addresses more fundamental aspects of accountability specification to facilitate machine interpretability, thereby enabling automation in monitoring. Joe Zou and Chris Pavlovski Copyright © 2013 Joe Zou and Chris Pavlovski. All rights reserved. Combining Slicing and Constraint Solving for Better Debugging: The CONBAS Approach Mon, 31 Dec 2012 16:40:08 +0000 Although slices provide a good basis for analyzing programs during debugging, they lack the capability to provide precise information regarding the most likely root causes of faults. Hence, a lot of work is left to the programmer during fault localization. In this paper, we present an approach that combines an advanced dynamic slicing method with constraint solving in order to reduce the number of delivered fault candidates. The approach is called Constraints Based Slicing (CONBAS). The idea behind CONBAS is to convert an execution trace of a failing test case into its constraint representation and to check whether it is possible to find values for all variables in the execution trace such that there is no contradiction with the test case. For doing so, we make use of the correctness and incorrectness assumptions behind a diagnosis and the given failing test case. Besides the theoretical foundations and the algorithm, we present empirical results and discuss future research.
The obtained empirical results indicate an improvement of about 28% for the single-fault case and 50% for the double-fault case compared to dynamic slicing approaches. Birgit Hofer and Franz Wotawa Copyright © 2012 Birgit Hofer and Franz Wotawa. All rights reserved. Applying a Goal Programming Model to Support the Selection of Artifacts in a Testing Process Sun, 30 Dec 2012 08:22:52 +0000 This paper proposes the definition of a goal programming model for the selection of artifacts to be developed during a testing process, so that the set of selected artifacts is more viable given the reality of micro and small enterprises. This model is based on the IEEE Standard 829, which establishes a set of artifacts that must be generated throughout the test activities. Several factors can influence the definition of this set of artifacts. Therefore, in order to consider such factors, we developed a multicriteria model that helps in determining the priority of artifacts according to the reality of micro and small enterprises. Andreia Rodrigues da Silva, Fernando Rodrigues de Almeida Júnior, and Placido Rogerio Pinheiro Copyright © 2012 Andreia Rodrigues da Silva et al. All rights reserved. An SOA-Based Model for the Integrated Provisioning of Cloud and Grid Resources Tue, 20 Nov 2012 17:55:19 +0000 In recent years, the availability and models of use of networked computing resources within reach of e-Science have been changing rapidly, with the coexistence of many disparate paradigms: high-performance computing, grid, and, recently, cloud. Unfortunately, none of these paradigms is recognized as the ultimate solution, and a convergence of them all should be pursued. At the same time, recent works have proposed a number of models and tools to address the growing needs and expectations in the field of e-Science. In particular, they have shown the advantages and the feasibility of modeling e-Science environments and infrastructures according to the service-oriented architecture. In this paper, we suggest a model to promote the convergence and the integration of the different computing paradigms and infrastructures for the dynamic on-demand provisioning of resources from multiple providers as a cohesive aggregate, leveraging the service-oriented architecture. In addition, we propose a design aimed at endorsing a flexible, modular, workflow-based computing model for e-Science. The model is supplemented by a working prototype implementation together with a case study in the applicative domain of bioinformatics, which is used to validate the presented approach and to carry out some performance and scalability measurements. Andrea Bosin Copyright © 2012 Andrea Bosin. All rights reserved. Towards Self-Adaptive KPN Applications on NoC-Based MPSoCs Mon, 19 Nov 2012 17:18:09 +0000 Self-adaptivity is the ability of a system to adapt itself dynamically to internal and external changes. Such a capability helps systems meet their performance and quality goals while judiciously using available resources. In this paper, we propose a framework to implement application-level self-adaptation capabilities in KPN applications running on NoC-based MPSoCs. The monitor-controller-adapter mechanism is used at the application level. The monitor measures various parameters to check whether the system meets the assigned goals. The controller takes decisions to steer the system towards the goal, and these decisions are applied by the adapters. The proposed framework requires minimal modifications to the application code and offers ease of integration.
It incorporates a generic adaptation controller based on fuzzy logic. We present the MJPEG encoder as a case study to demonstrate the effectiveness of the approach. Our results show that even if the parameters of the fuzzy controller are not tuned optimally, adaptation convergence is achieved within reasonable time and error limits. Moreover, the steady-state overhead incurred by the framework is 4% for average frame rate, 3.5% for average bit rate, and 0.5% for additional control data introduced in the network. Onur Derin, Prasanth Kuncheerath Ramankutty, Paolo Meloni, and Emanuele Cannella Copyright © 2012 Onur Derin et al. All rights reserved. Assessing the Open Source Development Processes Using OMM Thu, 04 Oct 2012 12:47:09 +0000 The assessment of development practices in Free Libre Open Source Software (FLOSS) projects can contribute to the improvement of the development process by identifying poor practices and providing a list of necessary practices. Available assessment methods (e.g., Capability Maturity Model Integration (CMMI)) do not sufficiently address FLOSS-specific aspects (e.g., geographically distributed development, importance of the contributions, reputation of the project, etc.). We present a FLOSS-focused, CMMI-like assessment/improvement model: the QualiPSo Open Source Maturity Model (OMM). OMM focuses on the development process; this makes it different from existing assessment models that are focused on the assessment of the product. We have assessed six FLOSS projects using OMM. Three projects were started and led by a software company, and three are developed by three different FLOSS communities. We identified as poorly addressed development activities the number of commit/bug reports, the external contributions, and risk management. The results showed that FLOSS projects led by companies adopt standard project management approaches, such as product planning, design definition, and testing, that are less often addressed by community-led FLOSS projects. The OMM is valuable both for the FLOSS community, by identifying critical development activities that need to be improved, and for potential users, who can better decide which product to adopt. Etiel Petrinja and Giancarlo Succi Copyright © 2012 Etiel Petrinja and Giancarlo Succi. All rights reserved. A Multi-Layered Control Approach for Self-Adaptation in Automotive Embedded Systems Thu, 04 Oct 2012 11:31:16 +0000 We present an approach for self-adaptation in automotive embedded systems using a hierarchical, multi-layered control approach. We model automotive systems as a set of constraints and define a hierarchy of control loops based on different criteria. Adaptations are first performed locally, on a lower layer of the architecture. If this fails due to the restricted scope of the control cycle, the next higher layer is in charge of finding a suitable adaptation. We compare different options regarding the responsibility split in multi-layered control in a self-healing scenario with a setup adopted from automotive in-vehicle networks. We show that a multi-layered control approach has clear performance benefits over central control, even though all layers work on the same set of constraints. Furthermore, we show that a responsibility split with respect to network topology is preferable to a functional split. Marc Zeller and Christian Prehofer Copyright © 2012 Marc Zeller and Christian Prehofer. All rights reserved.
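The escalation pattern described in the automotive self-adaptation abstract above (attempt a local adaptation first, and hand the violation to the next higher layer only when the local scope is insufficient) can be summarized in a short sketch. The Java below is a hypothetical illustration of that control pattern and is not code from the paper; the layer interface and constraint representation are invented for the example.

import java.util.List;

// Hypothetical sketch of multi-layered adaptation: each layer tries to repair a
// constraint violation within its own scope; on failure, the violation escalates
// to the next higher layer. Names and interfaces are invented for illustration.
public class LayeredAdaptationSketch {

    interface ControlLayer {
        // Returns true if the layer found a suitable adaptation within its scope.
        boolean tryAdapt(String violatedConstraint);
    }

    // Escalate a violated constraint up the hierarchy (lowest layer first).
    static boolean handle(List<ControlLayer> layers, String violatedConstraint) {
        for (int i = 0; i < layers.size(); i++) {
            if (layers.get(i).tryAdapt(violatedConstraint)) {
                System.out.println("Layer " + i + " adapted: " + violatedConstraint);
                return true;
            }
            System.out.println("Layer " + i + " out of scope, escalating...");
        }
        return false; // no layer could find a suitable adaptation
    }

    public static void main(String[] args) {
        // Toy layers: the local layer only handles timing constraints,
        // the system-level layer handles anything else.
        ControlLayer local = c -> c.startsWith("timing");
        ControlLayer system = c -> true;
        handle(List.of(local, system), "bandwidth_link3");
    }
}

A central controller could be modelled as the last element of the list, which keeps the escalation logic identical at every level.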
Improving Model Checking with Context Modelling Mon, 24 Sep 2012 08:02:34 +0000 This paper deals with the problem of using formal techniques based on model checking when models are large and formal verification techniques face the combinatorial explosion issue. The goal of the approach is to express and verify requirements relative to certain context situations. The idea is to unroll the context into several scenarios and to successively compose each scenario with the system and verify the resulting composition. We propose to specify the context in which the behavior occurs using a language called CDL (Context Description Language), based on activity and message sequence diagrams. The properties to be verified are specified with textual patterns and attached to specific regions in the context. The central idea is to automatically split each identified context into a set of smaller subcontexts and to compose them with the model to be validated. To this end, we have implemented a recursive splitting algorithm in our toolset OBP (Observer-based Prover). This paper shows how the combinatorial explosion can be reduced by specifying the environment of the system to be validated. Philippe Dhaussy, Frédéric Boniol, Jean-Charles Roger, and Luka Leroux Copyright © 2012 Philippe Dhaussy et al. All rights reserved. Metadata for Approximate Query Answering Systems Mon, 03 Sep 2012 08:23:42 +0000 In business intelligence systems, data warehouse metadata management and representation are receiving more and more attention from vendors and designers. The standard language for data warehouse metadata representation is the Common Warehouse Metamodel. However, business intelligence systems also include approximate query answering systems, since these software tools provide fast responses for decision making on the basis of approximate query processing. Currently, the standard metamodel does not allow the metadata needed by approximate query answering systems to be represented. In this paper, we propose an extension of the standard metamodel in order to define the metadata to be used in online approximate analytical processing. These metadata have been successfully adopted in ADAP, a web-based approximate query answering system that creates and uses statistical data profiles. Francesco Di Tria, Ezio Lefons, and Filippo Tangorra Copyright © 2012 Francesco Di Tria et al. All rights reserved. Software Quality Assurance Methodologies and Techniques Tue, 28 Aug 2012 13:15:25 +0000 Chin-Yu Huang, Hareton Leung, Wu-Hon Francis Leung, and Osamu Mizuno Copyright © 2012 Chin-Yu Huang et al. All rights reserved. A Stateful Approach to Generate Synthetic Events from Kernel Traces Wed, 15 Aug 2012 14:11:04 +0000 We propose a generic synthetic event generator based on kernel trace events. The proposed method makes use of patterns of system states and environment-independent semantic events rather than platform-specific raw events. This method can be applied to different kernel and user level trace formats. We use a state model to store intermediate states and events. This stateful method supports partial trace abstraction and enables users to seek and navigate through the trace events and to abstract out the desired part. Since it uses the current and previous values of the system states and has more knowledge of the underlying system execution, it can generate a wide range of synthetic events.
One of the obvious applications of this method is the identification of system faults and problems, which will be presented later in this paper. We will discuss the architecture of the method, its implementation, and the performance results. Naser Ezzati-Jivan and Michel R. Dagenais Copyright © 2012 Naser Ezzati-Jivan and Michel R. Dagenais. All rights reserved. Genetic Programming for Automating the Development of Data Management Algorithms in Information Technology Systems Thu, 05 Jul 2012 10:29:54 +0000 Information technology (IT) systems are present in almost all fields of human activity, with emphasis on processing, storage, and handling of datasets. Automated methods to provide access to data stored in databases have been proposed mainly for tasks related to knowledge discovery and data mining (KDD). However, for this purpose, the database is used only to query data in order to find relevant patterns associated with the records. Processes modelled in IT systems should manipulate the records to modify the state of the system. Linear genetic programming for databases (LGPDB) is a tool proposed here for the automatic generation of programs that can query, delete, insert, and update records in databases. The obtained results indicate that the LGPDB approach is able to generate programs that effectively model the processes of IT systems, opening the possibility of automating relevant stages of data manipulation and thus allowing human programmers to focus on more complex tasks. Gabriel A. Archanjo and Fernando J. Von Zuben Copyright © 2012 Gabriel A. Archanjo and Fernando J. Von Zuben. All rights reserved.
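As a rough illustration of the kind of representation a linear genetic program over database operations might use, the Java sketch below encodes an individual as a fixed-length sequence of primitive query/insert/update/delete instructions executed against a toy in-memory table. The operation set, record model, and random generator are simplifying assumptions for illustration and do not reproduce the actual LGPDB implementation.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Hypothetical sketch of a linear GP individual over primitive database operations.
public class LinearDbProgramSketch {
    enum Op { QUERY, INSERT, UPDATE, DELETE }

    record Instruction(Op op, String key, String value) {}

    static final Random RNG = new Random(42);

    // Generate a random linear program of the given length.
    static List<Instruction> randomProgram(int length) {
        List<Instruction> program = new ArrayList<>();
        Op[] ops = Op.values();
        for (int i = 0; i < length; i++) {
            Op op = ops[RNG.nextInt(ops.length)];
            program.add(new Instruction(op, "k" + RNG.nextInt(5), "v" + RNG.nextInt(5)));
        }
        return program;
    }

    // Execute a program against a toy in-memory "table" and return the resulting state.
    static Map<String, String> run(List<Instruction> program, Map<String, String> table) {
        Map<String, String> state = new HashMap<>(table);
        for (Instruction ins : program) {
            switch (ins.op()) {
                case QUERY -> state.get(ins.key());            // read-only access
                case INSERT, UPDATE -> state.put(ins.key(), ins.value());
                case DELETE -> state.remove(ins.key());
            }
        }
        return state;
    }

    public static void main(String[] args) {
        List<Instruction> individual = randomProgram(6);
        System.out.println(individual);
        System.out.println(run(individual, new HashMap<>()));
    }
}

A fitness function would then compare the final table state with the state expected by the modelled IT process, which is the part deliberately left out of this sketch.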