Marcelo Jenkins Coronas

Academic background

  • Ph.D., University of Delaware, U.S.A., 1992
  • M.Sc., University of Delaware, U.S.A., 1989
  • Bachelor's degree, Universidad de Costa Rica, 1986

Certifications:

  • ASQ Certified Software Quality Engineer (CSQE), 2003

Work experience

  • August 2017 - Present:

Current position: Director of the Centro de Investigaciones en Tecnologías de la Información y Comunicación (CITIC).

  • February 1986 - Present:

Current position: Full Professor (Catedrático). Institution: Escuela de Ciencias de la Computación e Informática, Universidad de Costa Rica. Duties:
1. Professor in the Master's Program in Computing and Informatics
2. Researcher in software engineering
3. Former Director of the Master's Program in Computing and Informatics
4. Former Director of the Escuela de Ciencias de la Computación e Informática

  • May 2015 - May 2017:

Position: Minister. Institution: Ministerio de Ciencia y Tecnología. On leave granted by the Universidad de Costa Rica.

  • November 1992 - April 2015:

Position: Information technology consultant. Duties:
1. Definition of corporate information technology strategies
2. Reengineering of administrative processes using information technology
3. Implementation of quality assurance systems for software development
4. Specification and implementation of software development standards
5. Staff training in software quality
6. Evaluation of new information technologies
7. Planning and management of information technology projects
8. Software process engineering

  • August 1984 - March 1986:

Position: Analyst/Programmer. Company: INDECA Ltda. Duties:
1. Development of administrative information systems

Projects

Publications

Evaluation of a model-based testing platform for Java applications

Description:

Model-based testing (MBT) automates the design and generation of test cases from a model. This process includes model building, test selection criteria, test case generation, and test case execution stages. Current tools support this process at various levels of automation, most of them supporting three out of four stages. Among them is MBT4J, a platform that extends ModelJUnit with several techniques, offering a high level of automation for testing Java applications. In this study, the authors evaluate the efficacy of the MBT4J platform in terms of the number of test cases generated, errors detected, and coverage metrics. A case study is conducted using two open-source Java systems from public repositories and 15 different configurations. MBT4J was able to automatically generate five models from the source code. It was also able to generate up to 2025 unique test cases for one system and up to 1044 for the other, resulting in 167 and 349 failed tests, respectively. Transition and transition pair coverage reached 100% for all models. Code coverage ranged between 72% and 84% for one system and between 59% and 76% for the other. The study found that Greedy and Random were the most effective testers for finding errors.

Publication type: Journal Article

Published in: IET Software
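
As a rough illustration of the process described in the abstract above (model building, test selection, test case generation, and test case execution), the following self-contained Java sketch hand-codes the idea on a toy example: a finite-state model of a small bounded stack, a random tester that walks its enabled actions against the implementation, and a simple transition-coverage count. The class names, the toy model, and the coverage bookkeeping are illustrative assumptions only; they are not the MBT4J or ModelJUnit API.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.LinkedHashSet;
    import java.util.List;
    import java.util.Random;
    import java.util.Set;

    // Minimal model-based testing sketch: a finite-state model of a bounded
    // stack drives a random walk over enabled actions and records which
    // model transitions the generated test steps covered.
    // Illustrative only -- not the MBT4J or ModelJUnit API.
    public class MbtSketch {

        static class StackModel {
            static final int CAPACITY = 3;
            private final Deque<Integer> sut = new ArrayDeque<>(); // system under test
            private int size = 0;                                  // abstract model state

            String state() { return "size=" + size; }

            boolean pushEnabled() { return size < CAPACITY; }
            void push() { sut.push(size); size++; check(); }

            boolean popEnabled() { return size > 0; }
            void pop() { sut.pop(); size--; check(); }

            // Oracle: the abstract model state must always match the SUT state.
            private void check() {
                if (size != sut.size())
                    throw new AssertionError("model says " + size + ", SUT has " + sut.size());
            }
        }

        public static void main(String[] args) {
            StackModel model = new StackModel();
            Random random = new Random(42);               // fixed seed for reproducibility
            Set<String> covered = new LinkedHashSet<>();  // transitions exercised so far

            // Random tester: pick an enabled action, execute it on the model and the
            // SUT together, and record the (state, action, state) transition.
            for (int step = 0; step < 50; step++) {
                String before = model.state();
                List<String> enabled = new ArrayList<>();
                if (model.pushEnabled()) enabled.add("push");
                if (model.popEnabled()) enabled.add("pop");
                String action = enabled.get(random.nextInt(enabled.size()));
                if (action.equals("push")) model.push(); else model.pop();
                covered.add(before + " --" + action + "--> " + model.state());
            }

            // The full model has 6 transitions: push from sizes 0..2, pop from sizes 1..3.
            System.out.println("Transition coverage: " + covered.size() + " of 6");
            covered.forEach(System.out::println);
        }
    }

MBT tools automate these same steps at larger scale: a model is derived or written once, a tester strategy (random, greedy, and so on) explores it, and coverage metrics such as transition and transition-pair coverage report how much of the model the generated tests exercised.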

Model-based testing areas, tools and challenges: A tertiary study

Description:

Context: Model-based testing is one of the approaches most studied by secondary studies in the area of software testing. Aggregating knowledge from secondary studies on model-based testing can be useful for both academia and industry.

Objective: The goal of this study is to characterize secondary studies in model-based testing, in terms of the areas, tools and challenges they have investigated. 

Method: We conducted a tertiary study following the guidelines for systematic mapping studies. Our mapping included 22 secondary studies, of which 12 were literature surveys and 10 were systematic reviews, over the period 1996–2016.

Results: A hierarchy of model-based testing areas and subareas was built based on existing taxonomies as well as data that emerged from the secondary studies themselves. This hierarchy was then used to classify studies, tools, challenges and their tendencies in a unified classification scheme. We found that the two most studied areas are UML models and transition-based notations, both being modeling paradigms. Regarding tendencies of areas over time, we found two areas with constant activity, namely test objectives and model specification. With respect to tools, we found only five studies that compared and classified model-based testing tools. These tools have been classified into common dimensions that mainly refer to the model type and the phases of the model-based testing process they support. We reclassified all the tools into the hierarchy of model-based testing areas we proposed, and found that most tools were reported within the modeling paradigm area. With regard to tendencies of tools, we found that tools for testing the functional behavior of software have prevailed over time. Another finding was the shift from tools that support the generation of abstract tests to those that support the generation of executable tests. To analyze challenges, we used six categories that emerged from the data (based on a grounded analysis): efficacy, availability, complexity, professional skills, investment, cost & effort, and evaluation & empirical evidence. We found that most challenges were related to availability. We also classified challenges according to our hierarchy of model-based testing areas, and found that most challenges fell in the model specification area. With respect to tendencies in challenges, we found that they have moved from the complexity of the approaches to the lack of approaches for specific software domains.

Conclusions: Only a few systematic reviews on model-based testing could be found; therefore, some areas still lack secondary studies, particularly test execution aspects, language types, model dynamics, and some modeling paradigms and generation methods. We thus encourage the community to perform further systematic reviews and mapping studies, following known protocols and reporting procedures, in order to increase the quality and quantity of empirical studies in model-based testing.

Publication type: Journal Article

Published in: CLEI Electronic Journal

Identifying implied security requirements from functional requirements

Description:

The elicitation of software security requirements in the early stages of the software development life cycle is an essential task. Using security requirements templates could help practitioners identify implied software security requirements from the functional requirements of a software system. In this paper, we replicated a previous study that analyzed the effectiveness of security requirements templates in supporting the identification of security requirements. Our objective was to evaluate this approach and compare the applicability of the previous findings. We conducted the first replication of the controlled experiment in 2015 and subsequently conducted two differentiated replications in 2018. We evaluated the responses of 33 participants in terms of quality, coverage, relevance and efficiency, and discussed insights regarding the impact of context factors. Participants were divided into a treatment group (security requirements templates) and a control group (no templates). Our findings support some previous results: the treatment group performed significantly better than the control group in terms of the coverage of the identified security requirements. In addition, the requirements elicitation process performed significantly better on the relevance and efficiency metrics in two of the three replications. Security requirements templates helped participants identify a core set of the security requirements, and participants were favorable towards the use of templates in identifying security requirements.

Publication type: Conference Paper

Published in: 14th Iberian Conference on Information Systems and Technologies (CISTI)

A survey of software testing practices in Costa Rica

Description:

Software testing is an essential activity in software development projects for delivering high-quality products. In a previous study, we reported the results of a survey of software engineering practices in the Costa Rican industry. To analyze the specific software testing practices among practitioners in more depth, we replicated a previous survey conducted in South America. Our objective was to characterize the state of the practice based on practitioners' use and perceived importance of software testing practices. This survey evaluated 42 testing practices grouped into three categories: processes, activities and tools. A total of 92 practitioners responded to the survey. The participants indicated that: (1) recording test results, documenting test procedures and cases, and re-executing tests when the software is modified are considered useful and important by software testing practitioners; (2) acceptance and system testing are the two most useful and important testing types; and (3) tools for recording defects and the effort to fix them (bug tracking) and the availability of a test database for reuse are useful and important. Regarding the use of practices, the participants stated that: (4) planning and designing software tests before coding and evaluating the quality of test artifacts are not regular practices; (5) there is a lack of measurement of defect density and test coverage in the industry; and (6) tools for automatic generation of test cases and for estimating testing effort are rarely used. This study gave us a first glance at the state of the practice in software testing in a thriving and very dynamic industry that currently employs most of our computer science professionals. The benefits are twofold: for academia, it provides a road map to revise our academic offer, and for practitioners, it provides a first set of data to benchmark their practices.

Publication type: Conference Paper

Published in: XXII Ibero-American Conference on Software Engineering (CIbSE 2019)