Högskolan i Skövde

his.se Publications
Publications (10 of 28)
Igelmo, V., Syberfeldt, A., Hansson, J. & Aslam, T. (2022). Enabling Industrial Mixed Reality Using Digital Continuity: An Experiment Within Remanufacturing. In: Amos H. C. Ng; Anna Syberfeldt; Dan Högberg; Magnus Holm (Ed.), SPS2022: Proceedings of the 10th Swedish Production Symposium. Paper presented at 10th Swedish Production Symposium (SPS2022), Skövde, April 26–29 2022 (pp. 497-507). Amsterdam; Berlin; Washington, DC: IOS Press
Enabling Industrial Mixed Reality Using Digital Continuity: An Experiment Within Remanufacturing
2022 (English). In: SPS2022: Proceedings of the 10th Swedish Production Symposium / [ed] Amos H. C. Ng; Anna Syberfeldt; Dan Högberg; Magnus Holm, Amsterdam; Berlin; Washington, DC: IOS Press, 2022, p. 497-507. Conference paper, Published paper (Refereed)
Abstract [en]

In the digitalisation era, overlaying digital, contextualised information on top of the physical world is essential for efficient operation. Mixed reality (MR) is a technology designed for this purpose, and it is considered one of the critical drivers of Industry 4.0. This technology has proven to have multiple benefits in the manufacturing area, including improving flexibility, efficacy, and efficiency. Among the challenges that prevent the large-scale implementation of this technology is the authoring challenge, which we address by answering the following research questions: (1) “how can we speed up MR authoring in a manufacturing context?” and (2) “how can we reduce the deployment time of industrial MR experiences?”. This paper presents an experiment performed in collaboration with Volvo within the remanufacturing of truck engines. MR seems to be more valuable for remanufacturing than for many other applications in the manufacturing industry, and there the authoring challenge appears to be accentuated. In this experiment, product lifecycle management (PLM) tools are used along with internet of things (IoT) platforms and MR devices. This joint system is designed to keep the information up to date and ready to be used when needed. Having all the necessary data cascade from the PLM platform to the MR device via IoT prevents information silos and improves the system’s overall reliability. Results from the experiment show how the interconnection of information systems can significantly reduce development and deployment time. Experiment findings include a considerable increase in the complexity of the overall IT system, the need for substantial investment in it, and the necessity of highly qualified IT staff. The main contribution of this paper is a systematic approach to the design of industrial MR experiences.
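A minimal sketch of the data-cascading idea described above, assuming a REST-style IoT endpoint and JSON payloads; the endpoint, resource path, and field names are hypothetical and not taken from the experiment.

```python
# Hypothetical sketch: push the latest PLM data for one engine to an IoT platform
# endpoint that an MR client reads from, so the headset always sees current data.
# The URL layout and payload fields are illustrative assumptions only.
import requests

def cascade_plm_to_iot(iot_base_url: str, engine_id: str, plm_record: dict) -> None:
    """Publish one PLM snapshot for the given engine to the IoT platform."""
    response = requests.post(
        f"{iot_base_url}/assets/{engine_id}/plm",   # hypothetical resource path
        json=plm_record,
        timeout=5,
    )
    response.raise_for_status()

cascade_plm_to_iot(
    "https://iot.example.local", "engine-042",
    {"part": "cylinder head", "revision": "C", "torque_spec_nm": 95},
)
```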

Place, publisher, year, edition, pages
Amsterdam; Berlin; Washington, DC: IOS Press, 2022
Series
Advances in Transdisciplinary Engineering, ISSN 2352-751X, E-ISSN 2352-7528 ; 21
Keywords
Mixed reality, Digital Continuity, Product Lifecycle Management, Remanufacturing, Industry 4.0
National Category
Production Engineering, Human Work Science and Ergonomics; Information Systems; Other Electrical Engineering, Electronic Engineering, Information Engineering; Other Mechanical Engineering; Computer Systems
Research subject
Production and Automation Engineering; Distributed Real-Time Systems; VF-KDO
Identifiers
urn:nbn:se:his:diva-21105 (URN), 10.3233/ATDE220168 (DOI), 001191233200042 (), 2-s2.0-85132823251 (Scopus ID), 978-1-64368-268-6 (ISBN), 978-1-64368-269-3 (ISBN)
Conference
10th Swedish Production Symposium (SPS2022), Skövde, April 26–29 2022
Funder
Vinnova, 2019-00787
Note

CC BY-NC 4.0

Corresponding Author: victor.igelmo.garcia@his.se

The authors wish to thank the Swedish innovation agency Vinnova and the Strategic Innovation Programme Produktion2030 (funding number 2019-00787). Likewise, the authors wish to thank Volvo AB.

Available from: 2022-05-02 Created: 2022-05-02 Last updated: 2024-06-19. Bibliographically approved
Durisic, D., Staron, M., Tichy, M. & Hansson, J. (2019). Assessing the impact of meta-model evolution: a measure and its automotive application. Software and Systems Modeling, 18(2), 1419-1445
Assessing the impact of meta-model evolution: a measure and its automotive application
2019 (English). In: Software and Systems Modeling, ISSN 1619-1366, E-ISSN 1619-1374, Vol. 18, no 2, p. 1419-1445. Article in journal (Refereed) Published
Abstract [en]

Domain-specific meta-models play an important role in the design of large software systems by defining the language for the architectural models. Such common modeling languages are particularly important if multiple actors are involved in the development process, as they assure interoperability between the modeling tools used by different actors. The main objective of this paper is to facilitate the adoption of new domain-specific meta-model versions, or a subset of the new architectural features they support, by the architectural modeling tools used by different actors in the development of large software systems. In order to achieve this objective, we developed a simple measure of meta-model evolution (named NoC, Number of Changes) that captures atomic modifications between different versions of the analyzed meta-model. We evaluated the NoC measure on the evolution of the AUTOSAR meta-model, a domain-specific meta-model used in the design of automotive system architectures. The evaluation shows that the measure can be used as an indicator of the effort needed to update meta-model-based tools to support different actors in modeling new architectural features. Our detailed results show the impact of 14 new AUTOSAR features on the modeling tools used by the main actors in the automotive development process. We validated our results by finding a significant correlation between the results of the NoC measure and the actual effort needed to support these features in the modeling tools, as reported by modeling practitioners from four AUTOSAR tool vendors and the AUTOSAR tooling team at Volvo Cars. Generally, our study shows that quantitative analysis of domain-specific meta-model evolution using a simple measure such as NoC can be used as an indicator of the required updates in the meta-model-based tools that are needed to support new meta-model versions. However, our study also shows that qualitative analysis, which may include an inspection of the actual meta-model changes, is needed for a more accurate assessment.
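As a rough illustration of a change-count style measure, the sketch below counts added classes, removed classes, and attribute changes between two simplified meta-model versions; the actual NoC change taxonomy is defined in the paper and is richer than this toy version.

```python
# Hedged sketch of a Number-of-Changes (NoC) style count between two meta-model
# versions. A version is simplified here to {class_name: set_of_attribute_names}.
def noc(old: dict[str, set[str]], new: dict[str, set[str]]) -> int:
    changes = len(new.keys() - old.keys())          # added classes
    changes += len(old.keys() - new.keys())         # removed classes
    for cls in old.keys() & new.keys():
        changes += len(old[cls] ^ new[cls])         # attributes added or removed
    return changes

v1 = {"SwComponent": {"name", "ports"}, "Port": {"name"}}
v2 = {"SwComponent": {"name", "ports", "variants"}, "Signal": {"name", "length"}}
print(noc(v1, v2))  # 1 added class + 1 removed class + 1 attribute change = 3
```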

Place, publisher, year, edition, pages
Springer, 2019
Keywords
Domain-specific meta-models, Modeling tools, Architectural features, Software evolution, Measurement, Automotive software, AUTOSAR
National Category
Computer Sciences
Identifiers
urn:nbn:se:his:diva-16839 (URN), 10.1007/s10270-017-0601-1 (DOI), 000464022400029 (), 2-s2.0-85019757380 (Scopus ID)
Available from: 2019-04-25 Created: 2019-04-25 Last updated: 2024-01-17. Bibliographically approved
Al Mamun, M. A., Berger, C. & Hansson, J. (2019). Effects of measurements on correlations of software code metrics. Empirical Software Engineering, 24(4), 2764-2818
Effects of measurements on correlations of software code metrics
2019 (English). In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 24, no 4, p. 2764-2818. Article in journal (Refereed) Published
Abstract [en]

Context

Software metrics play a significant role in many areas in the life-cycle of software including forecasting defects and foretelling stories regarding maintenance, cost, etc. through predictive analysis. Many studies have found code metrics correlated to each other at such a high level that such correlated code metrics are considered redundant, which implies it is enough to keep track of a single metric from a list of highly correlated metrics.

Objective

Software is developed incrementally over a period of time. Traditionally, code metrics are measured cumulatively, as a cumulative or running sum. When a code metric is instead measured from the values of individual revisions or commits, without consolidating values from past revisions, it reflects the natural development of the software; this study identifies such a measure as organic. Density and average are two other ways of measuring metrics. This empirical study focuses on whether measurement types influence correlations of code metrics.

Method

To investigate the objective, this empirical study collected 24 code metrics, classified into four categories according to their measurement types, from 11,874 software revisions (i.e., commits) of 21 open source projects from eight well-known organizations. Kendall’s τ-B is used for computing correlations. To determine whether there is a significant difference between cumulative and organic metrics, the Mann-Whitney U test, the Wilcoxon signed-rank test, and the paired-samples sign test are performed.

Results

The cumulative metrics are found to be highly correlated to each other with an average coefficient of 0.79. For corresponding organic metrics, it is 0.49. When individual correlation coefficients between these two measure types are compared, correlations between organic metrics are found to be significantly lower (with p <0.01) than cumulative metrics. Our results indicate that the cumulative nature of metrics makes them highly correlated, implying cumulative measurement is a major source of collinearity between cumulative metrics. Another interesting observation is that correlations between metrics from different categories are weak.

Conclusions

Results of this study reveal that measurement types may have a significant impact on the correlations of code metrics and that transforming metrics into a different type can yield metrics with low collinearity. These findings provide a simple understanding of how transforming features to a different measurement type can produce new non-collinear input features for predictive models.
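The contrast between cumulative and organic measurement can be illustrated with synthetic data; the per-commit values, scales, and noise below are made up, and scipy's kendalltau (which computes the τ-B variant) stands in for the study's correlation analysis.

```python
# Synthetic illustration: the same pair of metrics correlated as cumulative
# (running-sum) series and as organic (per-commit) series. All values are made up.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
organic_loc = rng.integers(1, 200, size=500).astype(float)        # lines added per commit
organic_cx = 0.1 * organic_loc + rng.normal(0.0, 5.0, size=500)   # complexity added per commit

cumulative_loc = np.cumsum(organic_loc)
cumulative_cx = np.cumsum(organic_cx)

tau_cum, _ = kendalltau(cumulative_loc, cumulative_cx)   # scipy returns the tau-b variant
tau_org, _ = kendalltau(organic_loc, organic_cx)
print(f"cumulative tau-b = {tau_cum:.2f}, organic tau-b = {tau_org:.2f}")
# Both running sums trend upward together, so the cumulative correlation sits close to 1,
# mirroring the collinearity effect the study attributes to cumulative measurement.
```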

Place, publisher, year, edition, pages
Springer, 2019
Keywords
Software code metrics, Measurement effects on correlations, Collinearity, Software engineering, Cumulative measurement
National Category
Software Engineering
Identifiers
urn:nbn:se:his:diva-17532 (URN), 10.1007/s10664-019-09714-9 (DOI), 000477582700029 (), 2-s2.0-85066026436 (Scopus ID)
Available from: 2019-08-15 Created: 2019-08-15 Last updated: 2022-09-15. Bibliographically approved
Al Mamun, M. A., Martini, A., Staron, M., Berger, C. & Hansson, J. (2019). Evolution of technical debt: An exploratory study. In: Ayca Kolukisa Tarhan, Ahmet Coskuncay (Ed.), Joint Proceedings of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement (IWSM Mensura 2019): Haarlem, The Netherlands, October 7-9, 2019. Paper presented at 2019 Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement, IWSM-Mensura 2019, Haarlem, The Netherlands, October 7-9, 2019 (pp. 87-102). CEUR-WS, 2476
Evolution of technical debt: An exploratory study
2019 (English). In: Joint Proceedings of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement (IWSM Mensura 2019): Haarlem, The Netherlands, October 7-9, 2019 / [ed] Ayca Kolukisa Tarhan, Ahmet Coskuncay, CEUR-WS, 2019, Vol. 2476, p. 87-102. Conference paper, Published paper (Refereed)
Abstract [en]

Context: Technical debt is known to impact the maintainability of software. As source code files grow in size, maintainability becomes more challenging. Therefore, it is expected that the density of technical debt in larger files would be reduced for the sake of maintainability. Objective: This exploratory study investigates whether a newly introduced metric, ‘technical debt density trend’, helps to better understand and explain the evolution of technical debt. The ‘technical debt density trend’ metric is the slope of the line through two successive ‘technical debt density’ measures plotted against the ‘lines of code’ values of two consecutive revisions of a source code file. Method: This study used 11,822 commits or revisions of 4,013 Java source files from 21 open source projects. For the technical debt measure, the SonarQube tool is used with 138 code smells. Results: This study finds that the ‘technical debt density trend’ metric has interesting characteristics that make it particularly attractive for understanding the pattern of accrual and repayment of technical debt, because it breaks a technical debt measure down into multiple components; e.g., ‘technical debt density’ can be broken down into the mean density of revisions that accrue technical debt and the mean density of revisions that repay technical debt. The use of the ‘technical debt density trend’ metric helps us understand the evolution of technical debt with greater insight.
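Reading the metric's definition literally, a minimal sketch of the 'technical debt density trend' for one revision step might look as follows; the numbers are invented, and the real study derives the debt values from SonarQube's 138 code smells.

```python
# Illustrative only: technical-debt density and its trend (slope) between two
# consecutive revisions of one file. Debt here is in minutes of remediation effort.
def td_density(td_minutes: float, loc: int) -> float:
    return td_minutes / loc

def td_density_trend(loc_prev: int, td_prev: float, loc_curr: int, td_curr: float) -> float:
    # Slope of the density change with respect to file size; assumes the LOC changed.
    return (td_density(td_curr, loc_curr) - td_density(td_prev, loc_prev)) / (loc_curr - loc_prev)

# A file grows from 200 to 260 LOC while its debt grows from 40 to 45 minutes:
# density falls from 0.200 to ~0.173 min/LOC, so the trend is negative
# (density decreasing as the file grows), even though absolute debt increased.
print(round(td_density_trend(200, 40.0, 260, 45.0), 5))
```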

Place, publisher, year, edition, pages
CEUR-WS, 2019
Series
CEUR Workshop Proceedings, ISSN 1613-0073 ; 2476
Keywords
Code debt, Code smells, Slope of technical debt density, Software metrics, Technical debt, Technical debt density, Technical debt density trend, Codes (symbols), Maintainability, Odors, Code smell, Exploratory studies, Java source files, Multiple components, Open source projects, Technical debts, Open source software
National Category
Software Engineering
Identifiers
urn:nbn:se:his:diva-17859 (URN), 2-s2.0-85074108547 (Scopus ID)
Conference
2019 Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement, IWSM-Mensura 2019, Haarlem, The Netherlands, October 7-9, 2019
Available from: 2019-11-07 Created: 2019-11-07 Last updated: 2020-01-29. Bibliographically approved
Liebel, G., Marko, N., Tichy, M., Leitner, A. & Hansson, J. (2018). Model-based engineering in the embedded systems domain: an industrial survey on the state-of-practice. Software and Systems Modeling, 17(1), 91-113
Model-based engineering in the embedded systems domain: an industrial survey on the state-of-practice
2018 (English). In: Software and Systems Modeling, ISSN 1619-1366, E-ISSN 1619-1374, Vol. 17, no 1, p. 91-113. Article in journal (Refereed) Published
Abstract [en]

Model-based engineering (MBE) aims at increasing the effectiveness of engineering by using models as important artifacts in the development process. While empirical studies on the use and the effects of MBE in industry exist, only a few of them target the embedded systems domain. We contribute to the body of knowledge with an empirical study on the use and the assessment of MBE in that particular domain. The goal of this study is to assess the current state of practice and the challenges the embedded systems domain is facing due to shortcomings with MBE. We collected quantitative data from 113 subjects, mostly professionals working with MBE, using an online survey. The collected data spans different aspects of MBE, such as the modeling languages, tools, and notations used, the effects of MBE introduction, and the shortcomings of MBE. Our main findings are that MBE is used by a majority of all participants in the embedded systems domain, mainly for simulation, code generation, and documentation. Reported positive effects of MBE are higher quality and improved reusability. The main shortcomings are interoperability difficulties between MBE tools, high training effort for developers, and usability issues. Our study offers valuable insights into current industrial practice and can guide future research in the fields of systems modeling and embedded systems.

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2018
Keywords
Model-based engineering, Model-driven engineering, Embedded systems, Industry, Modeling, Empirical study, State-of-practice
National Category
Software Engineering; Computer Sciences
Identifiers
urn:nbn:se:his:diva-14789 (URN), 10.1007/s10270-016-0523-3 (DOI), 000424654100007 (), 2-s2.0-84962207101 (Scopus ID)
Note

© Springer-Verlag Berlin Heidelberg 2016

Available from: 2018-03-02 Created: 2018-03-02 Last updated: 2024-01-17. Bibliographically approved
Hansson, J., Helton, S. & Feiler, P. H. (2018). ROI Analysis of the System Architecture Virtual Integration Initiative. Carnegie Mellon University, Software Engineering Institute
ROI Analysis of the System Architecture Virtual Integration Initiative
2018 (English). Report (Other academic)
Abstract [en]

The System Architecture Virtual Integration (SAVI) initiative is a multiyear, multimillion dollar program that is developing the capability to virtually integrate systems before designs are implemented and tested on hardware. The purpose of SAVI is to develop a means of countering the costs of exponentially increasing complexity in modern aerospace software systems. The program is sponsored by the Aerospace Vehicle Systems Institute, a research center of the Texas Engineering Experiment Station, which is a member of the Texas A&M University System. This report presents an analysis of the economic effects of the SAVI approach on the development of software-reliant systems for aircraft compared to existing development paradigms. The report describes the detailed inputs and results of a return-on-investment (ROI) analysis to determine the net present value of the investment in the SAVI approach. The ROI is based on rework cost-avoidance attributed to earlier discovery of requirements errors through analysis of virtually integrated models of the embedded software system expressed in the SAE International Architecture Analysis and Design Language (AADL) standard architecture modeling language. The ROI analysis uses conservative estimates of costs and benefits, especially for those parameters that have a proven, strong correlation to overall system-development cost. The results of the analysis, in part, show that the nominal cost reduction for a system that contains 27 million source lines of code would be $2.391 billion (out of an estimated $9.176 billion), a 26.1% cost savings. The original study, reported here, had a follow-on study to validate and further refine the estimated cost savings.
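The headline figure can be checked directly, and the report's net-present-value framing can be sketched generically; the discount rate and cash-flow profile below are illustrative only, not values from the report.

```python
# Check of the reported savings percentage, plus a generic net-present-value helper.
savings = 2.391e9
baseline = 9.176e9
print(f"cost savings: {savings / baseline:.1%}")   # -> 26.1%

def npv(rate: float, cash_flows: list[float]) -> float:
    """NPV of cash flows, where cash_flows[t] is received at the end of year t + 1."""
    return sum(cf / (1.0 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

# Hypothetical profile: an up-front investment in virtual integration followed by
# rework cost-avoidance in later years (all figures invented for illustration).
print(f"illustrative NPV: {npv(0.10, [-50e6, 200e6, 400e6]) / 1e6:.1f} M$")
```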

Place, publisher, year, edition, pages
Carnegie Mellon University, Software Engineering Institute, 2018. viii, 34 pages
Series
Technical Report ; CMU/SEI-2018-TR-002
National Category
Embedded Systems; Software Engineering; Computer Systems; Computer Sciences; Computer Engineering
Identifiers
urn:nbn:se:his:diva-23301 (URN), 10.1184/R1/12363080.v1 (DOI)
Note

This material is based upon work funded and supported by the Department of Defense under Contract No. FA8702-15-D-0002 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center.

Available from: 2023-10-09 Created: 2023-10-09 Last updated: 2023-10-09. Bibliographically approved
Al Mamun, M. A., Berger, C. & Hansson, J. (2017). Correlations of software code metrics: An empirical study. In: IWSM Mensura '17: Proceedings of the 27th International Workshop on Software Measurement and 12th International Conference on Software Process and Product Measurement. Paper presented at IWSM/Mensura '17: 27th International Workshop on Software Measurement and 12th International Conference on Software Process and Product Measurement, Gothenburg, Sweden, October, 2017 (pp. 255-266). Association for Computing Machinery (ACM)
Correlations of software code metrics: An empirical study
2017 (English). In: IWSM Mensura '17: Proceedings of the 27th International Workshop on Software Measurement and 12th International Conference on Software Process and Product Measurement, Association for Computing Machinery (ACM), 2017, p. 255-266. Conference paper, Published paper (Refereed)
Abstract [en]

Background: The continuing growth of software size brings about challenges related to release planning and maintainability. Foreseeing the growth of software metrics can assist in taking proactive decisions in different areas where software metrics play vital roles. For example, source code metrics are used to automatically calculate technical debt related to code quality, which may indicate how maintainable a software system is. Thus, predicting such metrics can give us an indication of technical debt in future releases of the software. Objective: Estimation or prediction of software metrics can be performed more meaningfully if the relationships between different domains of metrics, and between individual metrics and those domains, are well understood. To understand such relationships, this empirical study collected 25 metrics classified into four domains from 9572 software revisions of 20 open source projects from 8 well-known companies. Results: We found that software size related metrics are the most correlated, both among themselves and with metrics from other domains. Complexity and documentation related metrics are more correlated with size metrics than with each other. Metrics in the duplications domain are observed to be more correlated with themselves at the domain level. However, a metric-to-domain level exploration reveals that the metrics with the strongest correlations are in fact connected to size metrics. The overall correlation ranking of duplication metrics is the lowest among all domains and metrics. Contribution: Knowledge gained from this research will help to understand the inherent relationships between metrics and domains. This knowledge, together with metric-level relationships, will allow building better predictive models for software code metrics.
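A toy version of the correlation analysis, using synthetic metric values and pandas' Spearman rank correlation; the metric names, domain assignments, and relationships below are invented for illustration.

```python
# Synthetic illustration of a metric correlation matrix across domains.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
loc = rng.integers(100, 5000, size=300).astype(float)
metrics = pd.DataFrame({
    "size_loc": loc,                                              # size domain
    "size_statements": 0.6 * loc + rng.normal(0, 50, 300),        # size domain
    "complexity_cyclomatic": 0.02 * loc + rng.normal(0, 3, 300),  # complexity domain
    "duplication_pct": rng.uniform(0, 30, 300),                   # duplications domain
})
# Size metrics correlate strongly with each other and with complexity,
# while the independent duplication metric does not.
print(metrics.corr(method="spearman").round(2))
```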

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2017
Series
ACM International Conference Proceeding Series
Keywords
Correlation of Metrics, Software Code Metrics, Software Engineering, Spearman’s Rank Correlation, Codes (symbols), Computer software, Open source software, Different domains, Empirical studies, Open source projects, Rank correlation, Software codes, Software revisions, Source code metrics, Strong correlation, Open systems
National Category
Software Engineering
Identifiers
urn:nbn:se:his:diva-18808 (URN), 10.1145/3143434.3143445 (DOI), 2-s2.0-85038399512 (Scopus ID), 978-1-4503-4853-9 (ISBN)
Conference
IWSM/Mensura '17: 27th International Workshop on Software Measurement and 12th International Conference on Software Process and Product Measurement, Gothenburg, Sweden, October, 2017
Available from: 2020-07-08 Created: 2020-07-08 Last updated: 2020-07-08. Bibliographically approved
Antinyan, V., Staron, M., Sandberg, A. & Hansson, J. (2016). A Complexity Measure for Textual Requirements. In: Jens Heidrich & Frank Vogelezang (Ed.), Proceedings of the 26th International Workshop on Software Measurement (IWSM) and the 11th International Conference on Software Process and Product Measurement (Mensura) IWSM-Mensura 2016: . Paper presented at Joint Conference of the 26th International Workshop on Software Measurement (IWSM) and the 11th International Conference on Software Process and Product Measurement (Mensura) (IWSM-Mensura 2016), Berlin, Germany, October 5-7, 2016 (pp. 148-158). IEEE
A Complexity Measure for Textual Requirements
2016 (English). In: Proceedings of the 26th International Workshop on Software Measurement (IWSM) and the 11th International Conference on Software Process and Product Measurement (Mensura) IWSM-Mensura 2016 / [ed] Jens Heidrich & Frank Vogelezang, IEEE, 2016, p. 148-158. Conference paper, Published paper (Refereed)
Abstract [en]

Unequivocally understandable requirements are vital for the software design process. However, in practice it is hard to achieve the desired level of understandability, because in large software products a substantial share of requirements tend to have ambiguous or complex descriptions. Over time such requirements slow down development and increase the risk of late design modifications, so finding and improving them is an urgent task for software designers. Manual reviewing is one way of addressing the problem, but it is effort-intensive and critically slow for large products. Another way is measurement, in which case one needs to design effective measures. In recent years there have been great endeavors in creating and validating measures for requirements understandability, most of them focused on ambiguous patterns. While ambiguity is one property with a major effect on understandability, complexity is another important property with a major effect that has been comparatively less investigated. In this paper we define a complexity measure for textual requirements through an action research project in a large software development organization. We also present its evaluation results in three large companies. The evaluation shows that there is a significant correlation between the measurement values and the manual assessment values of practitioners. We recommend using this measure together with previously created ambiguity measures as a means for automated identification of complex specifications.

Place, publisher, year, edition, pages
IEEE, 2016
Keywords
measure, requirement, quality, complexity, automation
National Category
Software Engineering; Computer Sciences
Identifiers
urn:nbn:se:his:diva-13549 (URN), 10.1109/IWSM-Mensura.2016.030 (DOI), 000399139200018 (), 2-s2.0-8501196621 (Scopus ID), 978-1-5090-4147-3 (ISBN), 978-1-5090-4148-0 (ISBN)
Conference
Joint Conference of the 26th International Workshop on Software Measurement (IWSM) and the 11th International Conference on Software Process and Product Measurement (Mensura) (IWSM-Mensura 2016), Berlin, Germany, October 5-7, 2016
Available from: 2017-05-05 Created: 2017-05-05 Last updated: 2020-07-08. Bibliographically approved
Durisic, D., Staron, M., Tichy, M. & Hansson, J. (2016). Addressing the need for strict meta-modeling in practice - A case study of AUTOSAR. In: MODELSWARD 2016 - Proceedings of the 4th International Conference on Model-Driven Engineering and Software Development: . Paper presented at 4th International Conference on Model-Driven Engineering and Software Development (MODELSWARD), 19-21 Feb. 2016, Rome, Italy (pp. 317-322). SciTePress
Addressing the need for strict meta-modeling in practice - A case study of AUTOSAR
2016 (English). In: MODELSWARD 2016 - Proceedings of the 4th International Conference on Model-Driven Engineering and Software Development, SciTePress, 2016, p. 317-322. Conference paper, Published paper (Refereed)
Abstract [en]

Meta-modeling has been a topic of interest in the modeling community for many years, yielding a substantial number of papers describing its theoretical concepts. Many of them, such as the deep meta-modeling approach, aim to solve the problem that traditional UML-based domain-specific meta-modeling does not comply with the strict meta-modeling principle. In this paper, we show the practical use of meta-models in the automotive development process based on AUTOSAR and visualize the places in the AUTOSAR meta-model that violate the strict meta-modeling principle. We then explain how the AUTOSAR meta-modeling environment can be re-worked to comply with this principle by applying three individual approaches, each combined with the concept of Orthogonal Classification Architecture: UML extension, prototypical pattern, and deep instantiation. Finally, we discuss the applicability of these approaches in practice and contrast the identified issues with the actual problems faced by automotive meta-modeling practitioners. Our objective is to bridge the current gap between theoretical and practical concerns in meta-modeling.

Place, publisher, year, edition, pages
SciTePress, 2016
Keywords
AUTOSAR, Deep Meta-modeling, Domain-specific Meta-modeling, Strict Meta-modeling Principle, Engineering, Industrial engineering, Automotive development, Domain specific, Meta model, Meta-model approach, Model communities, Non-compliance, Prototypical patterns, Software design
National Category
Computer Sciences
Identifiers
urn:nbn:se:his:diva-18805 (URN), 000570754600036 (), 2-s2.0-84970005627 (Scopus ID), 978-989-758-168-7 (ISBN), 978-989-758-232-5 (ISBN), 978-1-5090-5898-3 (ISBN)
Conference
4th International Conference on Model-Driven Engineering and Software Development (MODELSWARD), 19-21 Feb. 2016, Rome, Italy
Note

Copyright © 2016 by SCITEPRESS – Science and Technology Publications, Lda.

Available from: 2020-07-08 Created: 2020-07-08 Last updated: 2021-09-24. Bibliographically approved
Rana, R., Staron, M., Berger, C., Hansson, J., Nilsson, M. & Meding, W. (2016). Analyzing defect inflow distribution and applying Bayesian inference method for software defect prediction in large software projects. Journal of Systems and Software, 117, 229-244
Analyzing defect inflow distribution and applying Bayesian inference method for software defect prediction in large software projects
2016 (English). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 117, p. 229-244. Article in journal (Refereed) Published
Abstract [en]

Tracking and predicting quality and reliability is a major challenge in large and distributed software development projects. A number of standard distributions have been used successfully in reliability engineering theory and practice, the most common for modeling software defect inflow being the exponential, Weibull, beta, and Non-Homogeneous Poisson Process (NHPP) distributions. Although standard distribution models have been recognized in reliability engineering practice, their ability to fit defect data from proprietary and OSS software projects is not well understood. Lack of knowledge about the underlying defect inflow distribution also makes it difficult to apply Bayesian inference methods for software defect prediction. In this paper we explore the defect inflow distribution of a total of fourteen large software projects/releases from two industrial domains and the open source community. We evaluate six standard distributions for their ability to fit the defect inflow data and also assess which information criterion is practical for selecting the distribution with the best fit. Our results show that the beta distribution provides the best fit to the defect inflow data for all industrial projects as well as the majority of the OSS projects studied. We also evaluate how information about the defect inflow distribution from historical projects can be used to model prior beliefs/experience in Bayesian analysis, which is useful for making software defect predictions early in the software project lifecycle.
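A small sketch of the distribution-comparison step: fit a few candidate distributions to synthetic defect-inflow data and rank them by AIC. The data, the candidate set, and the fixed location/scale constraints are illustrative assumptions, not the study's actual setup.

```python
# Synthetic illustration: compare candidate defect-inflow distributions by AIC
# (lower is better) on normalized weekly defect counts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
inflow = rng.beta(2, 5, size=120)   # synthetic normalized defect inflow per week

candidates = {
    "beta": (stats.beta, {"floc": 0, "fscale": 1}),
    "weibull": (stats.weibull_min, {"floc": 0}),
    "exponential": (stats.expon, {"floc": 0}),
}
for name, (dist, fixed) in candidates.items():
    params = dist.fit(inflow, **fixed)
    k = len(params) - len(fixed)                     # count only the free parameters
    log_likelihood = np.sum(dist.logpdf(inflow, *params))
    aic = 2 * k - 2 * log_likelihood
    print(f"{name:11s} AIC = {aic:7.1f}")
```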

Place, publisher, year, edition, pages
Elsevier, 2016
Keywords
Software, SRGM, Defect Inflow
National Category
Software Engineering
Identifiers
urn:nbn:se:his:diva-12642 (URN), 10.1016/j.jss.2016.02.015 (DOI), 000377231800015 (), 2-s2.0-84961641102 (Scopus ID)
Available from: 2016-07-01 Created: 2016-07-01 Last updated: 2020-07-08. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0003-2895-0780
