Högskolan i Skövde

his.se Publications
Publications (10 of 89)
Smedberg, H., Bandaru, S., Riveiro, M. & Ng, A. H. C. (2024). Mimer: A web-based tool for knowledge discovery in multi-criteria decision support. IEEE Computational Intelligence Magazine, 19(3), 73-87
Mimer: A web-based tool for knowledge discovery in multi-criteria decision support
2024 (English) In: IEEE Computational Intelligence Magazine, ISSN 1556-603X, E-ISSN 1556-6048, Vol. 19, no 3, p. 73-87. Article in journal (Refereed) Published
Abstract [en]

Practitioners of multi-objective optimization currently lack open tools that provide decision support through knowledge discovery. There exist many software platforms for multi-objective optimization, but they often fall short of implementing methods for rigorous post-optimality analysis and knowledge discovery from the generated solutions. This paper presents Mimer, a multi-criteria decision support tool for solution exploration, preference elicitation, knowledge discovery, and knowledge visualization. Mimer is openly available as a web-based tool and uses state-of-the-art web technologies based on WebAssembly to perform heavy computations on the client side. Its features include multiple linked visualizations and input methods that enable the decision maker to interact with the solutions, knowledge discovery through interactive data mining, and graph-based knowledge visualization. It also includes a complete Python programming interface for advanced data manipulation tasks that may be too specific for the graphical interface. Mimer is evaluated through a user study in which the participants are asked to perform representative tasks simulating practical analysis and decision making. The participants also complete a questionnaire about their experience and the features available in Mimer. The survey indicates that participants find Mimer useful for decision support. The participants also offered suggestions for enhancing some features and implementing new features to extend the capabilities of the tool.
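
The kind of post-optimality knowledge discovery the abstract refers to can be pictured with a small generic sketch: mine simple rules, in decision-variable space, that separate solutions matching an elicited preference. This is only an illustration, not Mimer's actual Python interface; all data, column names, and thresholds below are hypothetical.

```python
# Illustrative only: a generic post-optimality "knowledge discovery" step, NOT Mimer's API.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical Pareto set: decision variables x1..x3 and two objectives derived from them.
X = pd.DataFrame(rng.uniform(0, 1, size=(200, 3)), columns=["x1", "x2", "x3"])
objectives = pd.DataFrame({
    "cost": 2.0 * X["x1"] + 0.5 * X["x2"] + rng.normal(0, 0.05, 200),
    "throughput": 1.5 * X["x3"] - 0.3 * X["x1"] + rng.normal(0, 0.05, 200),
})

# Elicited preference (low cost AND high throughput), of the sort a decision maker
# would express interactively in a tool like Mimer.
preferred = (objectives["cost"] < objectives["cost"].quantile(0.4)) & \
            (objectives["throughput"] > objectives["throughput"].quantile(0.6))

# Mine interpretable rules in decision-variable space that separate preferred solutions.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, preferred)
print(export_text(tree, feature_names=list(X.columns)))
```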

Place, publisher, year, edition, pages
IEEE, 2024
National Category
Computer Sciences; Information Systems; Software Engineering; Computer Systems; Computational Mathematics
Research subject
Virtual Production Development (VPD); VF-KDO
Identifiers
urn:nbn:se:his:diva-23154 (URN); 10.1109/MCI.2024.3401420 (DOI); 001271410100001 (ISI); 2-s2.0-85198700093 (Scopus ID)
Funder
Knowledge Foundation, 2018-0011
Note

This work was supported by The Knowledge Foundation (KKS), Sweden, through the KKS Profile, Virtual Factories with Knowledge-Driven Optimization (VF-KDO), under Grant 2018-0011.

Available from: 2023-09-01 Created: 2023-09-01 Last updated: 2025-09-29. Bibliographically approved
Pettersson, T., Riveiro, M. & Löfström, T. (2024). Multimodal fine-grained grocery product recognition using image and OCR text. Machine Vision and Applications, 35(4), Article ID 79.
Multimodal fine-grained grocery product recognition using image and OCR text
2024 (English) In: Machine Vision and Applications, ISSN 0932-8092, E-ISSN 1432-1769, Vol. 35, no 4, article id 79. Article in journal (Refereed) Published
Abstract [en]

Automatic recognition of grocery products can be used to improve customer flow at checkouts and reduce labor costs and store losses. Product recognition is, however, a challenging task for machine learning-based solutions due to the large number of products and their variations in appearance. In this work, we tackle the challenge of fine-grained product recognition by first extracting a large dataset from a grocery store containing products that are only differentiable by subtle details. Then, we propose a multimodal product recognition approach that uses product images with extracted OCR text from packages to improve fine-grained recognition of grocery products. We evaluate several image and text models separately and then combine them using different multimodal models of varying complexities. The results show that image and textual information complement each other in multimodal models and enable a classifier with greater recognition performance than unimodal models, especially when the number of training samples is limited. Therefore, this approach is suitable for many different scenarios in which product recognition is used to further improve recognition performance. The dataset can be found at https://github.com/Tubbias/finegrainocr.
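
As a rough illustration of the fusion idea (image features combined with OCR text), the sketch below concatenates hypothetical image embeddings with TF-IDF features over OCR strings and trains a simple classifier. It is not one of the models evaluated in the paper; the data, labels, and feature dimensions are made up.

```python
# Minimal fusion sketch (image features + OCR text), not the paper's actual models.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per product crop.
image_embeddings = np.random.rand(4, 512)            # e.g. embeddings from a CNN backbone
ocr_texts = ["oat milk 1l", "oat milk barista 1l",   # OCR output read from the package
             "greek yoghurt 500g", "vanilla yoghurt 500g"]
labels = ["oat_milk", "oat_milk_barista", "greek_yoghurt", "vanilla_yoghurt"]

# Fusion by feature concatenation: character-level TF-IDF text features + dense image features.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
text_features = vectorizer.fit_transform(ocr_texts)
fused = hstack([csr_matrix(image_embeddings), text_features]).tocsr()

clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print(clf.predict(fused[:1]))
```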

Place, publisher, year, edition, pages
Springer Nature, 2024
Keywords
Grocery product recognition, Multimodal classification, Fine-grained recognition, Optical character recognition
National Category
Production Engineering, Human Work Science and Ergonomics; Computer graphics and computer vision; Natural Language Processing
Research subject
Virtual Production Development (VPD)
Identifiers
urn:nbn:se:his:diva-23933 (URN); 10.1007/s00138-024-01549-9 (DOI); 001243616100001 (ISI); 2-s2.0-85195555790 (Scopus ID)
Funder
Knowledge Foundation, 2020-0044; Swedish National Infrastructure for Computing (SNIC), 2018-05973; Swedish Research Council; University of Skövde
Note

CC BY 4.0

Tobias Pettersson tobias.pettersson@itab.com

The authors would like to thank ITAB Shop Products AB and Smart Industry Sweden (KKS-2020-0044) for their support. The machine learning training was enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at C3SE, partially funded by the Swedish Research Council through grant agreement no. 2018-05973.

Open access funding provided by University of Skövde

Available from: 2024-06-10 Created: 2024-06-10 Last updated: 2025-09-29. Bibliographically approved
Pettersson, T., Riveiro, M. & Löfström, T. (2024). Real-Time Automatic Checkout via Prompt-Based Product Extraction and Cross-Domain Learning. In: M. Arif Wani; Plamen Angelov; Feng Luo; Mitsunori Ogihara; Xintao Wu; Radu-Emil Precup; Ramin Ramezani; Xiaowei Gu (Ed.), Proceedings 2024 International Conference on Machine Learning and Applications ICMLA 2024: Miami, Florida, 18-20 December 2024. Paper presented at 2024 International Conference on Machine Learning and Applications ICMLA 2024, Miami, Florida, 18-20 December 2024 (pp. 1396-1403). IEEE
Real-Time Automatic Checkout via Prompt-Based Product Extraction and Cross-Domain Learning
2024 (English) In: Proceedings 2024 International Conference on Machine Learning and Applications ICMLA 2024: Miami, Florida, 18-20 December 2024 / [ed] M. Arif Wani; Plamen Angelov; Feng Luo; Mitsunori Ogihara; Xintao Wu; Radu-Emil Precup; Ramin Ramezani; Xiaowei Gu, IEEE, 2024, p. 1396-1403. Conference paper, Published paper (Refereed)
Abstract [en]

Automatic checkout systems are designed to predict a complete shopping receipt using an image from the checkout area. These systems require high classification accuracy across numerous classes and must operate in real-time, despite domain differences between training data and real-world conditions. Building on recent advancements, we propose a method that outperforms current solutions and can be applied in real-time in automatic checkout systems. Our method leverages the Segment Anything Model to extract high-quality masks from lab product images, which are then transformed into synthetic checkout images and adapted to the real domain using contrastive unpaired translation. We train a product recognition model with data augmentation, named SCA+Y8, and further improve it through fine-tuning with pseudo-labels from unlabeled checkout images, resulting in an improved model called SCAFT+Y8. SCAFT+Y8 substantially advances the state of the art, with an average receipt classification accuracy of 97.58%, and shows strong performance in smaller models, indicating the potential for deployment on low-cost edge devices.
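
Of the pipeline described above, only the synthetic-scene compositing step is easy to illustrate briefly: masked product cut-outs are pasted onto a checkout background to generate labeled training images. The snippet below is a sketch of that step only; the file names are hypothetical, and the SAM mask extraction, contrastive unpaired translation, and YOLOv8 training stages are not reproduced here.

```python
# Sketch of the synthetic-scene compositing step only; file names are hypothetical.
import random
from PIL import Image

background = Image.open("checkout_background.jpg").convert("RGB")
products = ["masked_product_01.png", "masked_product_02.png"]  # RGBA cut-outs, transparent background

annotations = []  # (source, x, y, w, h) boxes usable for detector training
for path in products:
    cutout = Image.open(path).convert("RGBA")
    x = random.randint(0, background.width - cutout.width)
    y = random.randint(0, background.height - cutout.height)
    background.paste(cutout, (x, y), mask=cutout)   # alpha channel acts as the paste mask
    annotations.append((path, x, y, cutout.width, cutout.height))

background.save("synthetic_checkout_scene.jpg")
print(annotations)
```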

Place, publisher, year, edition, pages
IEEE, 2024
Series
International Conference on Machine Learning and Applications (ICMLA), ISSN 1946-0740, E-ISSN 1946-0759
Keywords
Automatic Checkout, Domain Adaptation, Object Detection, YOLOv8, Contrastive Learning, Image enhancement, Image segmentation, Object recognition, Classification accuracy, Cross-domain learning, Domain differences, Objects detection, Real-time, Real-world, Training data
National Category
Computer Sciences; Computer graphics and computer vision
Research subject
Virtual Production Development (VPD)
Identifiers
urn:nbn:se:his:diva-24982 (URN); 10.1109/ICMLA61862.2024.00217 (DOI); 001468515500208 (ISI); 2-s2.0-105000879245 (Scopus ID); 979-8-3503-7489-6 (ISBN); 979-8-3503-7488-9 (ISBN)
Conference
2024 International Conference on Machine Learning and Applications ICMLA 2024, Miami, Florida, 18-20 December 2024
Funder
Knowledge Foundation, 2020-0044; Swedish Research Council, 2022-06725
Note

© 2024 IEEE

The authors would like to thank ITAB Shop Products AB and Smart Industry Sweden (KKS-2020-0044) for their support. The machine learning training was enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS), partially funded by the Swedish Research Council through grant agreement no. 2022-06725.

Available from: 2025-04-03 Created: 2025-04-03 Last updated: 2025-09-29. Bibliographically approved
Pettersson, T., Riveiro, M. & Löfström, T. (2023). Explainable Local and Global Models for Fine-Grained Multimodal Product Recognition. Paper presented at Multimodal KDD 2023: International Workshop on Multimodal Learning, held in conjunction with KDD'23, 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Long Beach, CA, August 6-10, 2023. Association for Computing Machinery (ACM)
Explainable Local and Global Models for Fine-Grained Multimodal Product Recognition
2023 (English) Conference paper, Published paper (Refereed)
Abstract [en]

Grocery product recognition techniques are emerging in the retail sector and are used to provide automatic checkout counters, reduce self-checkout fraud, and support inventory management. However, recognizing grocery products using machine learning models is challenging due to the vast number of products, their similarities, and changes in appearance. To address these challenges, more complex models are created by adding additional modalities, such as text from product packages. But these complex models pose additional challenges in terms of model interpretability. Machine learning experts and system developers need tools and techniques conveying interpretations to enable the evaluation and improvement of multimodal product recognition models. In this work, we thus propose an approach to provide local and global explanations that allow us to assess multimodal models for product recognition. We evaluate this approach on a large fine-grained grocery product dataset captured from a real-world environment. To assess the utility of our approach, experiments are conducted for three types of multimodal models. The results show that our approach provides fine-grained local explanations while being able to aggregate those into global explanations for each type of product. In addition, we observe a disparity between different multimodal models in what types of features they learn and which modality each model focuses on. This provides valuable insight to further improve the accuracy and robustness of multimodal product recognition models for grocery product recognition.
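
Since the paper's keywords point to LIME, the sketch below shows the general local-to-global pattern: per-instance LIME explanations whose word weights are averaged into a crude global view. It covers only a toy text classifier, not the paper's multimodal models, and assumes the lime package is installed.

```python
# Sketch: local LIME explanations aggregated into a rough global view (text modality only).
from collections import defaultdict
import numpy as np
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["oat milk one litre", "barista oat milk", "greek yoghurt natural", "yoghurt vanilla flavour"]
labels = [0, 0, 1, 1]
class_names = ["milk", "yoghurt"]

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
pipeline.fit(texts, labels)

explainer = LimeTextExplainer(class_names=class_names)
global_weights = defaultdict(list)
for text in texts:
    # Local explanation: word weights for this single prediction.
    exp = explainer.explain_instance(text, pipeline.predict_proba, num_features=5)
    for word, weight in exp.as_list():
        global_weights[word].append(weight)

# Average the local weights per word as a crude global explanation.
for word, weights in sorted(global_weights.items(), key=lambda kv: -abs(np.mean(kv[1]))):
    print(f"{word:>10s}: {np.mean(weights):+.3f}")
```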

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
Multimodal classification, Explainable AI, Grocery product recognition, LIME, Fine-grained recognition, Optical character recognition
National Category
Computer graphics and computer vision; Computer Sciences
Research subject
Virtual Production Development (VPD)
Identifiers
urn:nbn:se:his:diva-25773 (URN)
Conference
Multimodal KDD 2023: International Workshop on Multimodal Learning, held in conjunction with KDD'23, 29TH ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Long Beach, CA, August 6-10, 2023
Available from: 2025-08-29 Created: 2025-08-29 Last updated: 2025-10-27
Ohlander, U., Alfredson, J., Riveiro, M., Helldin, T. & Falkman, G. (2023). The Effects of Varying Degrees of Information on Teamwork: a Study on Fighter Pilots. Paper presented at International Annual Meeting of the Human Factors and Ergonomics Society, HFES 2023, Columbia, 23-27 October 2023. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 67(1), 1965-1970
The Effects of Varying Degrees of Information on Teamwork: a Study on Fighter Pilots
2023 (English) In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, ISSN 1071-1813, E-ISSN 2169-5067, Vol. 67, no 1, p. 1965-1970. Article in journal (Refereed) Published
Abstract [en]

A team of fighter pilots in a distributed environment with limited access to information relies on technology to pursue teamwork. In order to design systems that support distributed teamwork, it is therefore necessary to understand how access to information affects the team members. Certain factors, such as mutual performance monitoring, shared mental models, adaptability, and backup behavior, are considered essential for effective teamwork. We investigate these factors in this work, focusing on how visually communicated information affects fighter pilots’ perception of these factors. To this end, a questionnaire covering the teamwork factors in relation to defined scenarios containing various levels of information was distributed to fighter pilots. We show that the studied factors are affected by the level of information available to the pilots. In particular, mutual performance monitoring increases with the degree of available information. © 2023 Human Factors and Ergonomics Society.

Place, publisher, year, edition, pages
Sage Publications, 2023
Keywords
fighter pilots, information variation, teamwork
National Category
Information Systems; Information Systems, Social aspects; Other Engineering and Technologies
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-23797 (URN); 10.1177/21695067231192607 (DOI); 2-s2.0-85190953101 (Scopus ID)
Conference
International Annual Meeting of the Human Factors and Ergonomics Society, HFES 2023, Columbia, 23-27 October 2023
Note

CC BY-NC 4.0

Correspondence Address: U. Ohlander; Saab Aeronautics, Saab AB, Linköping, Bröderna Ugglas gata, 58188, Sweden; email: ulrika.ohlander@saabgroup.com; CODEN: PHFSD

Available from: 2024-05-02 Created: 2024-05-02 Last updated: 2025-09-29. Bibliographically approved
Ohlson, N.-E., Riveiro, M. & Bäckstrand, J. (2022). Identification of tasks to be supported by machine learning to reduce Sales & Operations Planning challenges in an engineer-to-order context. In: Amos H. C. Ng; Anna Syberfeldt; Dan Högberg; Magnus Holm (Ed.), SPS2022: Proceedings of the 10th Swedish production symposium. Paper presented at 10th Swedish Production Symposium (SPS2022), School of Engineering Science, University of Skövde, Sweden, April 26–29 2022 (pp. 39-50). Amsterdam; Berlin; Washington, DC: IOS Press
Identification of tasks to be supported by machine learning to reduce Sales & Operations Planning challenges in an engineer-to-order context
2022 (English) In: SPS2022: Proceedings of the 10th Swedish production symposium / [ed] Amos H. C. Ng; Anna Syberfeldt; Dan Högberg; Magnus Holm, Amsterdam; Berlin; Washington, DC: IOS Press, 2022, p. 39-50. Conference paper, Published paper (Refereed)
Abstract [en]

Sales and Operations Planning (S&OP) is a process that aims to align dimensioning efforts in a company, based on one integrated plan and with clear decision milestones. The alignment is cross-functional and connects different operations functions with each other to set an overall delivery ability. There are always challenges in connecting different functions in a company, as most S&OP practitioners agree; still, bridging them is one of the things the S&OP process should do. Digital solutions such as Enterprise Resource Planning (ERP) and other more or less sophisticated tools have contributed to improved cross-functional communication over time. S&OP in an Engineer-to-order (ETO) context, especially where engineering constitutes a major or equal portion of the work compared to, e.g., make-to-stock (MTS) and make-to-order (MTO) contexts, may face even further challenges. Technologies within Industry 4.0 are changing the way S&OP is carried out; one of the most relevant ones is Artificial Intelligence (AI), particularly Machine Learning (ML), which analyses data collected during these processes to find patterns and extract knowledge. The intent of this paper is, based on identified S&OP challenges, to see whether ML can be used to address these challenges.

Through a brief literature review together with empirical data from a single industrial case (SIC), S&OP challenges were defined and structured. Based on the challenges in several S&OP sub-areas, classified into data quality and horizontal and vertical disconnects, specific tasks were specified and structured into anomaly detection, clustering and classification, and prediction. Which exact ML method to use requires further work and tests. Still, this is a good starting point for the next step, and the specified tasks could also be used by other practitioners who want to start applying ML/AI in their daily activities.
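
One of the identified task types, anomaly detection on planning data, can be illustrated with a short sketch; the columns and values below are hypothetical and not drawn from the single industrial case.

```python
# Illustration of one task type the paper identifies (anomaly detection on planning data).
# All column names and values are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

orders = pd.DataFrame({
    "quoted_lead_time_days": [45, 50, 48, 200, 46, 52, 49, 47],
    "engineering_hours":     [120, 135, 128, 900, 122, 140, 131, 126],
    "order_value_ksek":      [800, 950, 870, 860, 820, 990, 900, 880],
})

# Flag order lines whose combination of lead time, engineering effort, and value
# deviates from the bulk of historical orders (candidates for manual S&OP review).
model = IsolationForest(contamination=0.15, random_state=0)
orders["anomaly"] = model.fit_predict(orders) == -1
print(orders)
```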

Place, publisher, year, edition, pages
Amsterdam; Berlin; Washington, DC: IOS Press, 2022
Series
Advances in Transdisciplinary Engineering, ISSN 2352-751X, E-ISSN 2352-7528; 21
Keywords
Sales & Operations Planning, Engineer to Order, Machine Learning
National Category
Production Engineering, Human Work Science and Ergonomics
Research subject
VF-KDO
Identifiers
urn:nbn:se:his:diva-22302 (URN); 10.3233/ATDE220124 (DOI); 001191233200004 (ISI); 2-s2.0-85132814053 (Scopus ID); 978-1-64368-268-6 (ISBN); 978-1-64368-269-3 (ISBN)
Conference
10th Swedish Production Symposium (SPS2022), School of Engineering Science, University of Skövde, Sweden, April 26–29 2022
Note

CC BY-NC 4.0

Corresponding Author: Nils-Erik Ohlson, Jönköping University, School of Engineering, Gjuterigatan 5, SE 553 18 Jönköping, Sweden, E-mail: nilserik.ohlson@ju.se

Available from: 2022-05-02 Created: 2023-02-24 Last updated: 2025-09-29. Bibliographically approved
Ohlson, N.-E., Bäckstrand, J. & Riveiro, M. (2021). Artificial Intelligence-enhanced Sales & Operations Planning in an Engineer-to-order context. In: : . Paper presented at PLANs forsknings- och tillämpningskonferens 2021, Högskolan i Borås, 20-21 oktober 2021.
Artificial Intelligence-enhanced Sales & Operations Planning in an Engineer-to-order context
2021 (English) Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

Sales and Operations Planning (S&OP) is a process that aims to align dimensioning efforts in a company, based on the "One Plan" and with clear decision milestones, where "One Plan" relates to the ultimate outcome of S&OP by integrating multiple plans. This alignment is cross-functional and connects not only sales and operations, but also different operations functions with each other, to set an overall delivery ability. There are always challenges when connecting different functions in a company, as most S&OP practitioners agree; still, cross-functional integration is one of the things that the S&OP process addresses. For S&OP in an Engineer-to-order (ETO) context, especially where engineering is a major or an equal portion of the product compared to, e.g., make-to-stock (MTS) or make-to-order (MTO) contexts, further complexity is added. If these businesses also have long lead times and low volumes, another perspective on the S&OP process is given when it comes to the balance between demand and supply (DS). Digital solutions such as Enterprise Resource Planning (ERP) and other more or less sophisticated tools are a prerequisite for the S&OP process and improve cross-functional integration. Technologies within Industry 4.0 are changing the way S&OP is carried out; one of the most relevant ones is Artificial Intelligence (AI), particularly Machine Learning (ML), which analyses data collected during these processes to find patterns and extract knowledge.

 Therefore, in this paper, the purpose is to investigate and define the main sub-areas of the S&OP-process in an ETO-context and discuss how AI, in particular ML, currently supports the sub-areas. To be able to fulfil the purpose, a literature study of the two main fields, S&OP and AI, has been carried out.

The results point to an underuse of ML techniques for S&OP. Forecasting in an MTS context is where ML is mostly used, and the most common ML technique is Artificial Neural Networks (ANN), a form of supervised learning. The results of this paper will serve as a starting point for further research on the efforts and effects required for improving the S&OP process in an ETO context and on which ML techniques to use.

National Category
Production Engineering, Human Work Science and Ergonomics
Research subject
VF-KDO
Identifiers
urn:nbn:se:his:diva-22299 (URN)
Conference
PLANs forsknings- och tillämpningskonferens 2021, Högskolan i Borås, 20-21 oktober 2021
Available from: 2022-01-18 Created: 2023-02-24 Last updated: 2025-09-29. Bibliographically approved
Ohlson, N.-E., Bäckstrand, J. & Riveiro, M. (2021). Artificial Intelligence-enhanced Sales & Operations Planning in an Engineer-to-order context. In: : . Paper presented at PLANs forsknings- och tillämpningskonferens 2021, Högskolan i Borås, 20-21 oktober 2021.
Artificial Intelligence-enhanced Sales & Operations Planning in an Engineer-to-order context
2021 (English) Conference paper, Published paper (Refereed)
Abstract [en]

Sales and Operations Planning (S&OP) is a process that aims to align dimensioning efforts in a company, based on the "One Plan" and with clear decision milestones, where "One Plan" relates to the ultimate outcome of S&OP by integrating multiple plans. This alignment is cross-functional and connects not only sales and operations, but also different operations functions with each other, to set an overall delivery ability. There are always challenges when connecting different functions in a company, as most S&OP practitioners agree; still, cross-functional integration is one of the things that the S&OP process addresses. For S&OP in an Engineer-to-order (ETO) context, especially where engineering is a major or an equal portion of the product compared to, e.g., make-to-stock (MTS) or make-to-order (MTO) contexts, further complexity is added. If these businesses also have long lead times and low volumes, another perspective on the S&OP process is given when it comes to the balance between demand and supply (DS). Digital solutions such as Enterprise Resource Planning (ERP) and other more or less sophisticated tools are a prerequisite for the S&OP process and improve cross-functional integration. Technologies within Industry 4.0 are changing the way S&OP is carried out; one of the most relevant ones is Artificial Intelligence (AI), particularly Machine Learning (ML), which analyses data collected during these processes to find patterns and extract knowledge.

Therefore, in this paper, the purpose is to investigate and define the main sub-areas of the S&OP-process in an ETO-context and discuss how AI, in particular ML, currently supports the sub-areas. To be able to fulfil the purpose, a literature study of the two main fields, S&OP and AI, has been carried out.

The results point to an underuse of ML techniques for S&OP. Forecasting in an MTS context is where ML is mostly used, and the most common ML technique is Artificial Neural Networks (ANN), a form of supervised learning. The results of this paper will serve as a starting point for further research on the efforts and effects required for improving the S&OP process in an ETO context and on which ML techniques to use.

National Category
Production Engineering, Human Work Science and Ergonomics
Identifiers
urn:nbn:se:his:diva-24961 (URN)
Conference
PLANs forsknings- och tillämpningskonferens 2021, Högskolan i Borås, 20-21 oktober 2021
Available from: 2022-01-18 Created: 2025-03-13 Last updated: 2025-09-29. Bibliographically approved
Ulfenborg, B., Karlsson, A., Riveiro, M., Andersson, C. X., Sartipy, P. & Synnergren, J. (2021). Multi-Assignment Clustering: Machine learning from a biological perspective. Journal of Biotechnology, 326, 1-10
Multi-Assignment Clustering: Machine learning from a biological perspective
2021 (English) In: Journal of Biotechnology, ISSN 0168-1656, E-ISSN 1873-4863, Vol. 326, p. 1-10. Article in journal (Refereed) Published
Abstract [en]

A common approach for analyzing large-scale molecular data is to cluster objects sharing similar characteristics. This assumes that genes with highly similar expression profiles are likely participating in a common molecular process. Biological systems are extremely complex and challenging to understand, with proteins having multiple functions that sometimes need to be activated or expressed in a time-dependent manner. Thus, the strategies applied for clustering of these molecules into groups are of key importance for translation of data to biologically interpretable findings. Here we implemented a multi-assignment clustering (MAsC) approach that allows molecules to be assigned to multiple clusters, rather than single ones as in commonly used clustering techniques. When applied to high-throughput transcriptomics data, MAsC increased the power of the downstream pathway analysis and allowed identification of pathways with high biological relevance to the experimental setting and the biological systems studied. Multi-assignment clustering also reduced noise in the clustering partition by excluding genes with a low correlation to all of the resulting clusters. Together, these findings suggest that our methodology facilitates translation of large-scale molecular data into biological knowledge. The method is made available as an R package on GitLab (https://gitlab.com/wolftower/masc).
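
The multi-assignment idea can be pictured with a simplified sketch: assign each gene to every cluster whose centroid it correlates with above a threshold, and leave weakly correlated genes unassigned. The published method is the MAsC R package linked above; the Python code below is only an illustrative approximation on synthetic data, with a hypothetical threshold.

```python
# Simplified illustration of multi-assignment clustering; NOT the MAsC R package itself.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
expression = rng.normal(size=(300, 12))   # toy genes x samples expression matrix

k = 4
kmeans = KMeans(n_clusters=k, n_init=10, random_state=1).fit(expression)

threshold = 0.6  # hypothetical correlation cutoff
assignments = []
for profile in expression:
    # Correlate each gene's profile with every cluster centroid ...
    corrs = [np.corrcoef(profile, centroid)[0, 1] for centroid in kmeans.cluster_centers_]
    # ... and assign it to all clusters above the threshold (possibly several, possibly none).
    assignments.append([c for c, r in enumerate(corrs) if r > threshold])

multi = sum(len(a) > 1 for a in assignments)
unassigned = sum(len(a) == 0 for a in assignments)
print(f"multi-assigned genes: {multi}, unassigned (noise) genes: {unassigned}")
```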

Place, publisher, year, edition, pages
Elsevier, 2021
Keywords
Clustering, K-means, annotation enrichment, multiple cluster assignment, pathways, transcriptomics
National Category
Bioinformatics and Computational Biology
Research subject
Bioinformatics; Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-19329 (URN); 10.1016/j.jbiotec.2020.12.002 (DOI); 000616124700001 (ISI); 33285150 (PubMedID); 2-s2.0-85097644109 (Scopus ID)
Note

CC BY 4.0

Available from: 2020-12-16 Created: 2020-12-16 Last updated: 2025-09-29. Bibliographically approved
Ventocilla, E., Martins, R. M., Paulovich, F. & Riveiro, M. (2021). Scaling the Growing Neural Gas for Visual Cluster Analysis. Big Data Research, 26, Article ID 100254.
Scaling the Growing Neural Gas for Visual Cluster Analysis
2021 (English) In: Big Data Research, ISSN 2214-5796, E-ISSN 2214-580X, Vol. 26, article id 100254. Article in journal (Refereed) Published
Abstract [en]

The growing neural gas (GNG) is an unsupervised topology learning algorithm that models a data space through interconnected units that stand on the populated areas of that space. Its output is a graph that can be visually represented on a two-dimensional plane, and be used as a means to disclose cluster patterns in datasets. GNG, however, creates highly connected graphs when trained on high-dimensional data, which in turn leads to highly cluttered representations that fail to disclose any meaningful patterns. Moreover, its sequential learning limits its potential for faster executions on local datasets, and, more importantly, its potential for training on distributed datasets while leveraging the computational resources of the infrastructures in which they reside.

This paper presents two methods that improve GNG for the visualization of cluster patterns in large and high-dimensional datasets. The first one focuses on providing more meaningful and accurate cluster pattern representations of high-dimensional datasets, by avoiding connections that lead to high-dimensional graphs in the modeled topology, which may, in turn, lead to visual cluttering in 2D representations. The second method presented in this paper enables the use of GNG on big and distributed datasets with faster execution times, by modeling and merging separate parts of a dataset using the MapReduce model.

Quantitative and qualitative evaluations show that the first method leads to the creation of lower-dimensional graph structures, which in turn provide more accurate and meaningful cluster representations; and that the second method preserves the accuracy and meaning of the cluster representations while enabling its execution in distributed settings.
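
The second method's partition-and-merge structure can be sketched generically: model each data partition where it resides (map), then merge the partial models (reduce). The snippet below uses MiniBatchKMeans prototypes as a stand-in for GNG units, so it illustrates only the MapReduce pattern, not the paper's scalable GNG itself; the partition data is synthetic.

```python
# Sketch of the map/reduce partition-and-merge pattern only; MiniBatchKMeans prototypes
# stand in for GNG units, so this is NOT the paper's scalable GNG.
from functools import reduce
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(2)
partitions = [rng.normal(loc=i, scale=0.5, size=(1000, 10)) for i in range(4)]  # distributed chunks

def map_step(chunk):
    """Fit a small prototype model on one partition (would run where the data lives)."""
    return MiniBatchKMeans(n_clusters=8, n_init=3, random_state=0).fit(chunk).cluster_centers_

def reduce_step(units_a, units_b):
    """Merge two sets of units; a real GNG merge would also reconnect and prune graph edges."""
    return np.vstack([units_a, units_b])

merged_units = reduce(reduce_step, map(map_step, partitions))
print(merged_units.shape)   # combined set of prototype units from all partitions
```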

Place, publisher, year, edition, pages
Elsevier, 2021
Keywords
Growing neural gas, clustering, cluster patterns, visualization, mapreduce
National Category
Computer Systems
Research subject
Skövde Artificial Intelligence Lab (SAIL); VF-KDO
Identifiers
urn:nbn:se:his:diva-19460 (URN); 10.1016/j.bdr.2021.100254 (DOI); 000710458600012 (ISI); 2-s2.0-85113545584 (Scopus ID)
Note

CC BY 4.0

Available from: 2021-02-10 Created: 2021-02-10 Last updated: 2025-09-29. Bibliographically approved
Projects
Virtual factories with knowledge-driven optimization (VF-KDO); University of Skövde

Publications:

Mittermeier, L., Ng, A. H. C., Senington, R. & Jeusfeld, M. A. (2025). A Graph Database Approach for Supporting Knowledge-Driven and Simulation-Based Optimization in Industry and Academia. In: Sebastian Rank; Mathias Kühn; Thorsten Schmidt (Ed.), Simulation in Produktion und Logistik 2025. Paper presented at 21. ASIM-Fachtagung Simulation in Produktion und Logistik, Dresden, Germany, 24–26 September 2025. Dresden: Technische Universität Dresden, Article ID 43.

Iriondo Pascual, A., Högberg, D., Lebram, M., Spensieri, D., Mårdberg, P., Lämkull, D. & Ekstrand, E. (2025). Assessment of Manual Forces in Assembly of Flexible Objects by the Use of a Digital Human Modelling Tool—A Use Case. In: Russell Marshall; Steve Summerskill; Gregor Harih; Sofia Scataglini (Ed.), Advances in Digital Human Modeling II: Proceedings of the 9th International Digital Human Modeling Symposium, DHM 2025, July 29-31, 2025, Loughborough, UK (pp. 1-10). Cham: Springer.

Högberg, D., Iriondo Pascual, A. & Lebram, M. (2025). Comparison of Recommended Force Limits for Female Work Population Given by the Assembly Specific Force Atlas and the Arm Force Field Method. In: Russell Marshall; Steve Summerskill; Gregor Harih; Sofia Scataglini (Ed.), Advances in Digital Human Modeling II: Proceedings of the 9th International Digital Human Modeling Symposium, DHM 2025, July 29-31, 2025, Loughborough, UK (pp. 225-237). Cham: Springer.

Senington, R., Ng, A. H. C., Mittermeier, L. & Bandaru, S. (2025). Graph Databases for Group Decision Making in Industry: A Comprehensive Literature Review. IEEE Access, 13, Article ID 3596632.

Iriondo Pascual, A., Holm, M., Ng, A. H. C., Larsson, F. & Olsson, J. (2025). Integrating Motion Capture and Digital Human Modelling Tools for Evaluating Worker Ergonomics - A Case Study in a Medium Size Enterprise Assembly Station. In: Masaaki Kurosu; Ayako Hashizume (Ed.), Human-Computer Interaction: Thematic Area, HCI 2025, Held as Part of the 27th HCI International Conference, HCII 2025, Gothenburg, Sweden, June 22–27, 2025, Proceedings, Part III (pp. 362-373). Cham: Springer.

Perez Luque, E., Iriondo Pascual, A., Högberg, D., Lamb, M. & Brolin, E. (2025). Simulation-based multi-objective optimization combined with a DHM tool for occupant packaging design. International Journal of Industrial Ergonomics, 105, Article ID 103690.

Kühne, T. & Jeusfeld, M. A. (2025). Supporting sound multi-level modeling — Specification and implementation of a multi-dimensional modeling approach. Data & Knowledge Engineering, 160(November 2025), Article ID 102481.

Iriondo Pascual, A., Eklund, M. & Högberg, D. (2025). Towards automated hand force predictions: Use of random forest to classify hand postures. In: Sangeun Jin; Jeong Ho Kim; Yong-Ku Kong; Jaehyun Park; Myung Hwan Yun (Ed.), Proceedings of the 22nd Congress of the International Ergonomics Association, Volume 2: Better Life Ergonomics for Future Humans (IEA 2024). Paper presented at 22nd Triennial Congress of the International Ergonomics Association (IEA), Jeju, South Korea, August 25 to 29, 2024 (pp. 201-206). Singapore: Springer.

Danielsson, O., Ettehad, M. & Syberfeldt, A. (2024). Augmented Reality Smart Glasses for Industry: How to Choose the Right Glasses. In: Joel Andersson; Shrikant Joshi; Lennart Malmsköld; Fabian Hanning (Ed.), Sustainable Production through Advanced Manufacturing, Intelligent Automation and Work Integrated Learning: Proceedings of the 11th Swedish Production Symposium (SPS2024). Paper presented at 11th Swedish Production Symposium, SPS 2024, Trollhättan, 23-26 April 2024 (pp. 289-298). IOS Press.

Nourmohammadi, A., Fathi, M. & Ng, A. H. C. (2024). Balancing and scheduling human-robot collaborated assembly lines with layout and objective consideration. Computers & industrial engineering, 187, Article ID 109775.
Identifiers
ORCID iD: orcid.org/0000-0003-2900-9335
