Automatic Selection of Viewpoint for Digital Human Modelling
2020 (English). In: DHM2020: Proceedings of the 6th International Digital Human Modeling Symposium, August 31 – September 2, 2020 / [ed] Lars Hanson, Dan Högberg, Erik Brolin, Amsterdam: IOS Press, 2020, p. 61-70. Conference paper, Published paper (Refereed)
Abstract [en]
During concept design of new vehicles, workplaces, and other complex artifacts, it is critical to assess the positioning of instruments and controls from the perspective of the end user. One common way to make these assessments during early product development is through Digital Human Modelling (DHM). DHM tools are able to produce detailed simulations, including vision. Many of these tools comprise evaluations of direct vision, and some are also able to assess other perceptual features. However, to our knowledge, all DHM tools available today require manual selection of the manikin’s viewpoint. This can be both cumbersome and difficult, and it requires that the DHM user possesses detailed knowledge of the visual behavior of workers in the task being modelled. In the present study, we take the first steps towards automatic selection of viewpoint through a computational model of eye-hand coordination. We report descriptive statistics on visual behavior in a pick-and-place task executed in virtual reality. During reaching actions, the results reveal a very high degree of eye gaze towards the target object. Participants look at the target object at least once in essentially every trial, even during a repetitive action. The object remains fixated during large proportions of the reaching action, even when participants are forced to move in order to reach the object. These results are in line with previous research on eye-hand coordination and suggest that DHM tools should, by default, set the viewpoint to match the manikin’s grasping location.
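To illustrate the recommendation that a DHM tool orient the manikin's viewpoint towards the grasping location, the sketch below shows one minimal way this could be computed. It is not code from the study: the function name, coordinate convention (z-up), and example positions are assumptions, and a real DHM tool would likely expose its own camera or manikin API.

```python
import numpy as np

def viewpoint_towards_grasp(eye_pos, grasp_target, up=(0.0, 0.0, 1.0)):
    """Build a look-at frame that points the manikin's gaze at the grasp target.

    Assumed convention: right-handed coordinates with z up; the returned
    3x3 matrix has columns (right, up, forward) of the view frame.
    """
    eye = np.asarray(eye_pos, dtype=float)
    target = np.asarray(grasp_target, dtype=float)

    forward = target - eye
    forward /= np.linalg.norm(forward)            # unit gaze direction

    right = np.cross(forward, np.asarray(up, dtype=float))
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)            # re-orthogonalised up vector

    return np.column_stack((right, true_up, forward))

# Example: eye at head height (1.7 m), grasp target on a surface in front of the manikin
R = viewpoint_towards_grasp(eye_pos=(0.0, 0.0, 1.7), grasp_target=(0.6, 0.0, 1.2))
```

Setting the viewpoint this way would make the default gaze follow the reach target automatically, in line with the eye-hand coordination behavior reported in the abstract.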
Place, publisher, year, edition, pages
Amsterdam: IOS Press, 2020. p. 61-70
Series
Advances in Transdisciplinary Engineering, ISSN 2352-751X, E-ISSN 2352-7528 ; 11
Keywords [en]
Cognitive modelling, Digital Human Modelling, Eye-hand coordination
National Category
Interaction Technologies
Research subject
Interaction Lab (ILAB)
Identifiers
URN: urn:nbn:se:his:diva-18965
DOI: 10.3233/ATDE200010
ISI: 000680825700007
Scopus ID: 2-s2.0-85091213088
ISBN: 978-1-64368-104-7 (print)
ISBN: 978-1-64368-105-4 (electronic)
OAI: oai:DiVA.org:his-18965
DiVA, id: diva2:1462386
Conference
6th International Digital Human Modeling Symposium, August 31 – September 2, 2020, Skövde, Sweden
Part of project
Synergy Virtual Ergonomics (SVE), Knowledge Foundation
Funder
Knowledge Foundation, 20180167
Note
CC BY-NC 4.0
Funder: Knowledge Foundation and the INFINIT research environment (KKS Dnr. 20180167). This work was financially supported by the Synergy Virtual Ergonomics project, funded by the Swedish Knowledge Foundation, dnr 20180167. https://www.his.se/sve
Available from: 2020-08-29. Created: 2020-08-29. Last updated: 2021-09-06. Bibliographically approved.