Evaluation of automated tools for model-based software testing

Status: 
Project number: 
834-B7-749
Duration:
From 01/Mar/2017 to 28/Feb/2020

Objective:

To empirically characterize and evaluate model-based software test automation tools.


Description:

This project aims to characterize and evaluate automated model-based software testing tools. The methodologies to be used come from empirical software engineering; in particular, a systematic literature review, a quasi-experiment, and a case study will be carried out.

First, the main model-based software test automation tools will be characterized through a systematic literature review. Second, the performance of three model-based software test automation tools will be compared empirically by means of a quasi-experiment. Third, a case study will be conducted on the application of one of these tools in a software organization.

This project will build expertise in model-based automated testing and empirical software engineering, which can be used at the ECCI and the PCI to generate TFIAs, theses, future related research projects, and collaborations with other public and foreign universities working on these topics. Finally, through this project we will be able to reach the national software industry, generating evidence on the implementation of some of these techniques and tools that may be useful to it.

Project impact

The project's impact will be realized through the fulfillment of its specific objectives and goals, which will be reported in the final report. The impact on teaching can be assessed, in the medium term, based on the number of undergraduate or graduate courses into which improvements or changes derived from this research are introduced.

Principal investigator
Dr. Alexandra Martínez Porras

Collaborators
Dr. Marcelo Jenkins Coronas
Dr. Christian Quesada-López
Leonardo Villalobos Arias

Home academic unit
Centro de Investigaciones en Tecnologías de la Información y Comunicación (CITIC)

Collaborating academic units
Escuela de Ciencias de la Computación e Informática (ECCI)

Associated publications

Evaluation of a model-based testing platform for Java applications

Description:

Model-based testing (MBT) automates the design and generation of test cases from a model. This process includes model building, test selection criteria, test case generation, and test case execution stages. Current tools support this process at various levels of automation, most of them supporting three out of the four stages. Among them is MBT4J, a platform that extends ModelJUnit with several techniques, offering a high level of automation for testing Java applications. In this study, the authors evaluate the efficacy of the MBT4J platform in terms of the number of test cases generated, errors detected, and coverage metrics. A case study is conducted using two open-source Java systems from public repositories and 15 different configurations. MBT4J was able to automatically generate five models from the source code. It was also able to generate up to 2025 unique test cases for one system and up to 1044 for the other, resulting in 167 and 349 failed tests, respectively. Transition and transition pair coverage reached 100% for all models. Code coverage ranged between 72% and 84% for one system and between 59% and 76% for the other. The study found that Greedy and Random were the most effective testers for finding errors.
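
As a concrete illustration of the kind of model that ModelJUnit (which MBT4J extends) consumes, the sketch below models a small bounded stack as a finite state machine and lets a GreedyTester walk it while tracking the transition and transition-pair coverage metrics mentioned above. This is a hypothetical example, not code from the study: the BoundedStackModel class and the use of java.util.ArrayDeque as a stand-in system under test are assumptions, and the imports follow the nz.ac.waikato.modeljunit packages of ModelJUnit 2.x (older releases used net.sourceforge.czt.modeljunit).

import java.util.ArrayDeque;

import nz.ac.waikato.modeljunit.Action;
import nz.ac.waikato.modeljunit.FsmModel;
import nz.ac.waikato.modeljunit.GreedyTester;
import nz.ac.waikato.modeljunit.Tester;
import nz.ac.waikato.modeljunit.VerboseListener;
import nz.ac.waikato.modeljunit.coverage.TransitionCoverage;
import nz.ac.waikato.modeljunit.coverage.TransitionPairCoverage;

/** Hypothetical model of a bounded stack; an ArrayDeque plays the system under test. */
public class BoundedStackModel implements FsmModel {
    private static final int CAPACITY = 3;
    private int size = 0;                                        // abstract model state
    private final ArrayDeque<Integer> sut = new ArrayDeque<>();  // adapter to the "SUT"

    public Object getState() { return size; }                    // model state = element count

    public void reset(boolean testing) {                         // called before each test sequence
        size = 0;
        sut.clear();
    }

    public boolean pushGuard() { return size < CAPACITY; }       // guard: only push below capacity
    @Action public void push() {
        sut.push(size);
        size++;
        if (sut.size() != size) throw new AssertionError("SUT diverged from model after push");
    }

    public boolean popGuard() { return size > 0; }               // guard: only pop when non-empty
    @Action public void pop() {
        sut.pop();
        size--;
        if (sut.size() != size) throw new AssertionError("SUT diverged from model after pop");
    }

    public static void main(String[] args) {
        // Greedy and Random were the most effective testers in the study; Greedy is used here.
        Tester tester = new GreedyTester(new BoundedStackModel());
        tester.buildGraph();                                     // explore the FSM graph first
        tester.addListener(new VerboseListener());               // print each transition taken
        tester.addCoverageMetric(new TransitionCoverage());      // track transition coverage
        tester.addCoverageMetric(new TransitionPairCoverage());  // track transition-pair coverage
        tester.generate(50);                                     // generate and execute 50 test steps
        tester.printCoverage();                                  // report the coverage reached
    }
}

Swapping GreedyTester for RandomTester changes only the test selection strategy; the model and the coverage metrics stay the same.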

Publication type: Journal Article

Published in: IET Software

Evaluating a model-based software testing approach in an industrial context: A replicated study

Description:

Software organizations are continuously looking for techniques to increase the effectiveness and efficiency of their testing processes. Model-based testing (MBT) is an approach that automates the design and generation of test cases based on a model that represents the system under test. MBT can reduce the cost of software testing and improve the quality of systems. However, introducing the MBT approach can be complex for software development teams in industry. This paper replicates a previous study that evaluated the use of MBT by software engineers in an industrial context. The goal of this replication is to evaluate the feasibility and acceptance of the MBT approach from the perspective of quality engineers testing a software application in industry. We conducted a case study with four quality assurance engineers who modeled one module of the system under test and then generated and executed a set of test cases using an MBT tool. Participants were able to use MBT to model and test the software system and provided several insights about the challenges and opportunities of using this approach.

Publication type: Conference Paper

Published in: 14th Iberian Conference on Information Systems and Technologies (CISTI)

Model-based testing areas, tools and challenges: A tertiary study

Description:

Context: Model-based testing is one of the approaches most studied by secondary studies in the area of software testing. Aggregating knowledge from secondary studies on model-based testing can be useful for both academia and industry.

Objective: The goal of this study is to characterize secondary studies in model-based testing, in terms of the areas, tools and challenges they have investigated. 

Method: We conducted a tertiary study following the guidelines for systematic mapping studies. Our mapping included 22 secondary studies, of which 12 were literature surveys and 10 systematic reviews, over the period 1996–2016. 

Results: A hierarchy of model-based testing areas and subareas was built based on existing taxonomies as well as data that emerged from the secondary studies themselves. This hierarchy was then used to classify studies, tools, challenges and their tendencies in a unified classification scheme. We found that the two most studied areas are UML models and transition-based notations, both being modeling paradigms. Regarding tendencies of areas over time, we found two areas with constant activity through time, namely, test objectives and model specification. With respect to tools, we only found five studies that compared and classified model-based testing tools. These tools have been classified into common dimensions that mainly refer to the model type and the phases of the model-based testing process they support. We reclassified all the tools into the hierarchy of model-based testing areas we proposed, and found that most tools were reported within the modeling paradigm area. With regard to tendencies of tools, we found that tools for testing the functional behavior of software have prevailed over time. Another finding was the shift from tools that support the generation of abstract tests to those that support the generation of executable tests. For analyzing challenges, we used six categories that emerged from the data (based on a grounded analysis): efficacy, availability, complexity, professional skills, investment, cost & effort, and evaluation & empirical evidence. We found that most challenges were related to availability. Besides, we also classified challenges according to our hierarchy of model-based testing areas, and found that most challenges fell in the model specification area. With respect to tendencies in challenges, we found they have moved from the complexity of the approaches to the lack of approaches for specific software domains.

Conclusions: Only a few systematic reviews on model-based testing could be found, therefore some areas still lack secondary studies, particularly, test execution aspects, language types, model dynamics, as well as some modeling paradigms and generation methods. We thus encourage the community to perform further systematic reviews and mapping studies, following known protocols and reporting procedures, in order to increase the quality and quantity of empirical studies in model-based testing.

Publication type: Journal Article

Published in: CLEI Electronic Journal

A survey of software testing practices in Costa Rica

Description:

Software testing is an essential activity in software development projects for delivering high quality products. In a previous study, we reported the results of a survey of software engineering practices in the Costa Rican industry. To analyze in more depth the specific software testing practices among practitioners, we replicated a previous survey conducted in South America. Our objective was to characterize the state of the practice based on practitioners' use and perceived importance of software testing practices. This survey evaluated 42 testing practices grouped in three categories: processes, activities and tools. A total of 92 practitioners responded to the survey. The participants indicated that: (1) recording test results, documenting test procedures and cases, and re-executing tests when the software is modified are useful and important for software testing practitioners; (2) acceptance and system testing are the two most useful and important testing types; (3) tools for recording defects and the effort to fix them (bug tracking) and the availability of a test database for reuse are useful and important. Regarding the use of practices, the participants stated that: (4) planning and designing software tests before coding and evaluating the quality of test artifacts are not regular practices; (5) there is a lack of measurement of defect density and test coverage in the industry; and (6) tools for automatic generation of test cases and for estimating testing effort are rarely used. This study gave us a first glance at the state of the practice in software testing in a thriving and very dynamic industry that currently employs most of our computer science professionals. The benefits are twofold: for academia, it provides us with a road map to revise our academic offer, and for practitioners it provides a first set of data to benchmark their practices.

Publication type: Conference Paper

Published in: XXII Ibero-American Conference on Software Engineering, CIbSE 2019

Evaluating Model-Based Testing in an Industrial Project: An Experience Report

Description:

Model-based testing (MBT) is an approach that automates the design and generation of test cases based on a model that represents the system under test. MBT can reduce the cost of software testing and improve the quality of systems in the industry. The goal of this study is to evaluate the use of MBT in an industrial project with the purpose of analyzing its efficiency, efficacy and acceptance by software engineers. A case study was conducted where six software engineers modeled one module of a system, and then generated and executed the test cases using an MBT tool. Our results show that participants were able to model at least four functional requirements each, in a period of 20 to 60 minutes, reaching code coverage between 39% and 59% of the system module. We discuss relevant findings about the completeness of the models and common mistakes made during the modeling and concretization phases. Regarding the acceptance of MBT by participants, our results suggest that while they saw value in the MBT approach, they were not satisfied with the tool used (MISTA), because it did not support key industry needs.

Publication type: Conference Paper

Published in: Advances in Intelligent Systems and Computing

Using Model-Based Testing to Reduce Test Automation Technical Debt: An Industrial Experience Report

Description:

Technical debt is the metaphor used to describe the effect of incomplete or immature software artifacts that bring short-term benefits to projects but may have to be paid for later with interest. Software testing is known to be costly due to the time- and resource-consuming activities involved. Test automation is a strategy that can potentially reduce this cost and provide savings to the software development process. The lack, or poor implementation, of a test automation approach results in test automation debt. The goal of this paper is to report our experience using a model-based testing (MBT) approach on two industrial legacy applications and assess its impact on reducing test automation debt. We selected two legacy systems exhibiting high test automation debt, then used an MBT tool to model the systems and automatically generate test cases. We finally assessed the impact of this approach on test automation technical debt by analyzing the code coverage attained by the tests and by surveying development team perceptions. Our results show that test automation debt was reduced by adding a suite of automated tests and reaching more than 75% code coverage. Moreover, the development team agrees that MBT could help reduce other types of technical debt present in legacy systems, such as documentation debt and design debt. Although our results are promising, more studies are needed to validate our findings.

Publication type: Conference Paper

Published in: Advances in Intelligent Systems and Computing

Incorporando pruebas basadas en modelos para servicios web en un proceso de desarrollo ágil: Un caso de estudio en la industria

Description:

Agile teams face difficulties in performing in-depth software testing, given the short development iterations. In many cases, web service tests are performed manually, are time-consuming, and require the expertise of team members. A model-based testing approach that enables the automation of these tests could improve process efficiency and product quality; however, its adoption should not contravene the values, principles, and practices of agile methodologies. In this case study we discuss the process followed to incorporate model-based testing to automate web service tests in a team that implements agile practices, and we analyze its effectiveness when using the TestOptimal tool on RESTful web services. We also discuss the team members' perceptions, as well as the challenges and opportunities of using this type of approach in agile teams. The results indicate that model-based testing increases the number of test cases and defects found. For their part, team members consider that modeling knowledge and supporting tools are essential to increase the acceptance of these approaches during the development of an agile project. Although an improvement is achieved in the generation of automated test cases and in error detection, model-based testing is perceived as a complex approach to apply.
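
To make concrete what an automatically generated test step looks like once it is concretized against a RESTful service, the sketch below shows a small Java adapter of the kind an MBT tool can invoke: each method maps one model action to an HTTP call and checks the response status. It is a hypothetical illustration, not TestOptimal code or the team's actual suite; the localhost base URL, the /api/orders resource, and the method names are assumptions, and the HTTP calls use java.net.http.HttpClient from the standard library (Java 11+).

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/** Hypothetical adapter that concretizes model actions as calls to a RESTful service. */
public class OrderServiceAdapter {
    // Placeholder base URL; in a real project this would point at the service under test.
    private static final String BASE_URL = "http://localhost:8080/api/orders";

    private final HttpClient client = HttpClient.newHttpClient();

    /** Model action "listOrders": GET the collection and expect HTTP 200. */
    public void listOrders() throws IOException, InterruptedException {
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE_URL)).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        expect(200, response.statusCode(), "GET " + BASE_URL);
    }

    /** Model action "createOrder": POST a minimal JSON body and expect HTTP 201. */
    public void createOrder() throws IOException, InterruptedException {
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE_URL))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"item\":\"demo\",\"quantity\":1}"))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        expect(201, response.statusCode(), "POST " + BASE_URL);
    }

    private static void expect(int expected, int actual, String call) {
        if (actual != expected) {
            throw new AssertionError(call + " returned " + actual + ", expected " + expected);
        }
    }

    public static void main(String[] args) throws Exception {
        OrderServiceAdapter adapter = new OrderServiceAdapter();
        adapter.createOrder();   // a generated test sequence would chain such actions
        adapter.listOrders();
    }
}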

Publication type: Magazine Article

Published in: Revista Ibérica de Sistemas e Tecnologias de Informação

Comparing the effort and effectiveness of automated and manual tests

Description:

This paper presents three case studies that compare the effort and effectiveness of automated versus manual testing, in the context of a multinational services organization. Effort is measured in terms of the total test time, which includes script creation and test execution in the case of automated testing, and comprises test execution and reporting in the case of manual testing. Effectiveness is measured in terms of the number and severity of defects found. The software under test is a set of Java web applications. The testing process was carried out by two testers within the organization. Our results show that automated testing requires a higher initial effort, mainly due to the creation of the scripts, but this cost can be amortized over time as automated tests are executed multiple times for regression testing. Results also show that automated testing is more effective than manual testing at finding defects.
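
As a rough sketch of the amortization argument above, the snippet below compares the cumulative time of the two approaches over repeated regression runs and computes the break-even point. All numbers are hypothetical placeholders chosen for illustration, not the times measured in these case studies.

/** Hypothetical break-even calculation for automated vs. manual regression testing. */
public class TestEffortBreakEven {
    public static void main(String[] args) {
        // Illustrative placeholder values in minutes (not data from the study):
        double scriptCreation = 120;  // one-time effort to create the automated scripts
        double autoRun        = 5;    // execution time of one automated regression run
        double manualRun      = 45;   // execution plus reporting time of one manual run

        for (int runs = 1; runs <= 6; runs++) {
            double automated = scriptCreation + runs * autoRun;  // initial cost plus executions
            double manual    = runs * manualRun;                 // every run costs the same
            System.out.printf("runs=%d  automated=%.0f min  manual=%.0f min%n",
                    runs, automated, manual);
        }

        // Break-even: scriptCreation + n*autoRun <= n*manualRun
        //          => n >= scriptCreation / (manualRun - autoRun)
        int breakEven = (int) Math.ceil(scriptCreation / (manualRun - autoRun));
        System.out.println("With these assumptions, automation pays off after " + breakEven + " runs.");
    }
}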

Publication type: Conference Paper

Published in: 14th Iberian Conference on Information Systems and Technologies (CISTI)