2023 (English) In: Software Testing, Verification & Reliability, ISSN 0960-0833, E-ISSN 1099-1689, Vol. 33, no. 8, article id e1860. Article, review/survey (Refereed). Published.
Abstract [en]
Model-based test design is increasingly applied in practice and studied in research. Model-based testing (MBT) exploits abstract models of the software behaviour to generate abstract tests, which are then transformed into concrete tests ready to run on the code. Because abstract tests are designed to cover models but are run on code (after transformation), the effectiveness of MBT depends on whether model coverage also ensures coverage of key functional code. In this article, we investigate how MBT approaches generate tests from model specifications and how the coverage of tests designed strictly from the model translates to code coverage. We conducted a systematic literature review using snowballing, starting from three primary studies that we refer to as the initial seeds. At the end of our search iterations, we analysed 30 studies that helped answer our research questions. More specifically, this article characterizes how test sets generated at the model level are mapped and applied to the source code level; discusses how tests are generated from model specifications; analyses how the test coverage of models relates to the test coverage of the code when the same test set is executed; and identifies the technologies and software development tasks that are the focus of the selected studies. Finally, we identify common characteristics and limitations that affect the research and practice of MBT: (i) some studies did not fully describe how tools transform abstract tests into concrete tests, (ii) some studies overlooked the computational cost of model-based approaches and (iii) some studies found evidence of a robust correlation between decision coverage at the model level and branch coverage at the code level. We also note that most primary studies omitted essential details about their experiments.
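The pipeline the abstract describes (generate abstract tests that cover a behaviour model, then transform them into concrete tests executed against the code) can be illustrated with a minimal sketch. The login state machine, action names, `LoginSUT` class and adapter below are hypothetical examples for illustration only; they are not taken from the article or the surveyed studies.

```python
from collections import deque

# Hypothetical behaviour model of a login feature:
# (state, action) -> next state.
MODEL = {
    ("logged_out", "login_ok"):  "logged_in",
    ("logged_out", "login_bad"): "logged_out",
    ("logged_in",  "logout"):    "logged_out",
}

def shortest_path(model, start, goal):
    """Shortest action sequence from `start` to `goal` (BFS over the model)."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for (s, a), nxt in model.items():
            if s == state and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [a]))
    raise ValueError("unreachable state")

def abstract_tests(model, start):
    """One abstract test (action sequence) per transition, so the suite
    achieves all-transitions coverage at the model level."""
    return [shortest_path(model, start, state) + [action]
            for (state, action) in model]

# A toy system under test (hypothetical).
class LoginSUT:
    def __init__(self):
        self.user = None
    def login(self, user, pwd):
        self.user = user if pwd == "s3cret" else None
    def logout(self):
        self.user = None

# Concretization: the adapter maps each abstract action to a concrete call,
# turning model-level tests into tests that run on the code.
ADAPTER = {
    "login_ok":  lambda sut: sut.login("alice", "s3cret"),
    "login_bad": lambda sut: sut.login("alice", "wrong"),
    "logout":    lambda sut: sut.logout(),
}

def run_concrete(test):
    sut = LoginSUT()
    for action in test:
        ADAPTER[action](sut)
    return sut

suite = abstract_tests(MODEL, "logged_out")
```

Whether the transition coverage these tests achieve on the model also exercises the important branches of the real implementation is exactly the question the review investigates.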
Place, publisher, year, edition, pages
John Wiley & Sons, 2023
Keywords
model-based testing, systematic literature review, test case generation, test case transformation, test coverage criteria, Abstracting, Codes (symbols), Concretes, Model checking, Software design, Specifications, Model based testing, Model specifications, Model-based test, Test case, Test sets, Test-coverage, Software testing
National Category
Software Engineering
Research subject
Distributed Real-Time Systems
Identifiers
urn:nbn:se:his:diva-23214 (URN)
10.1002/stvr.1860 (DOI)
001059676500001 ()
2-s2.0-85169886627 (Scopus ID)
Funder
Knowledge Foundation, 20130085
Note
© 2023 John Wiley & Sons Ltd.
Correspondence: Fabiano C. Ferrari, Rodovia Washington Luis, Km 235, São Carlos, São Paulo, Brazil. Email: fcferrari@ufscar.br
Fabiano Ferrari was partly supported by the Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) - Brasil, grant #2016/21251-0, and by CNPq - Brasil, grants #306310/2016-3 and #312086/2021-0. Sten Andler was partly supported by KKS (The Knowledge Foundation) through project 20130085, Testing of Critical System Characteristics (TOCSYC). Mehrdad Saadatmand was partly funded by the SmartDelta Project (more information available at https://smartdelta.org/).
Available from: 2023-09-14 Created: 2023-09-14 Last updated: 2023-12-14 Bibliographically approved