Högskolan i Skövde

his.se Publications
Automatic Selection of Viewpoint for Digital Human Modelling
University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment. (Interaction Lab (ILAB)). ORCID iD: 0000-0002-6568-9342
University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment. (Interaction Lab (ILAB))
University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment. University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment. (Interaction Lab (ILAB)). ORCID iD: 0000-0003-2254-1396
2020 (English). In: DHM2020: Proceedings of the 6th International Digital Human Modeling Symposium, August 31 – September 2, 2020 / [ed] Lars Hanson, Dan Högberg, Erik Brolin, Amsterdam: IOS Press, 2020, p. 61-70. Conference paper, Published paper (Refereed)
Abstract [en]

During concept design of new vehicles, workplaces, and other complex artifacts, it is critical to assess the positioning of instruments and regulators from the perspective of the end user. One common way to make these kinds of assessments during early product development is through Digital Human Modelling (DHM). DHM tools are able to produce detailed simulations, including vision. Many of these tools include evaluations of direct vision, and some can also assess other perceptual features. However, to our knowledge, all DHM tools available today require manual selection of the manikin viewpoint. This can be both cumbersome and difficult, and requires that the DHM user possess detailed knowledge about the visual behavior of workers in the task being modelled. In the present study, we take the first steps towards automatic selection of viewpoint through a computational model of eye-hand coordination. We report descriptive statistics on visual behavior in a pick-and-place task executed in virtual reality. During reaching actions, results reveal a very high degree of eye-gaze towards the target object. Participants look at the target object at least once in essentially every trial, even during a repetitive action. The object remains fixated during large portions of the reaching action, even when participants are forced to move in order to reach the object. These results are in line with previous research on eye-hand coordination and suggest that DHM tools should, by default, set the viewpoint to match the manikin's grasping location.
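The abstract's suggestion, defaulting the manikin's viewpoint to its grasping location, can be sketched as a simple look-at computation: given the eye position and the grasp target in world coordinates, derive a unit gaze vector and the corresponding yaw/pitch angles. This is an illustrative sketch only; the function name, coordinate convention (z up), and angle parameterization are assumptions, not the API of any particular DHM tool or the method used in the paper.

```python
import math

def viewpoint_toward_grasp(eye, target):
    """Return (gaze, yaw, pitch) aiming the viewpoint at the grasp target.

    eye, target: (x, y, z) world coordinates, z up. gaze is a unit vector
    from eye to target; yaw and pitch are in degrees. Names and conventions
    here are hypothetical, for illustration only.
    """
    dx, dy, dz = (t - e for t, e in zip(target, eye))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist == 0.0:
        raise ValueError("eye and target coincide; gaze direction undefined")
    gaze = (dx / dist, dy / dist, dz / dist)
    yaw = math.degrees(math.atan2(dy, dx))      # rotation about the up (z) axis
    pitch = math.degrees(math.asin(dz / dist))  # elevation above the horizontal
    return gaze, yaw, pitch
```

For example, a standing manikin with eyes at roughly 1.7 m looking at a grasp point on a table ahead would get a straight-ahead yaw and a negative (downward) pitch.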

Place, publisher, year, edition, pages
Amsterdam: IOS Press, 2020. p. 61-70
Series
Advances in Transdisciplinary Engineering, ISSN 2352-751X, E-ISSN 2352-7528 ; 11
Keywords [en]
Cognitive modelling, Digital Human Modelling, Eye-hand coordination
National Category
Interaction Technologies
Research subject
Interaction Lab (ILAB)
Identifiers
URN: urn:nbn:se:his:diva-18965
DOI: 10.3233/ATDE200010
ISI: 000680825700007
Scopus ID: 2-s2.0-85091213088
ISBN: 978-1-64368-104-7 (print)
ISBN: 978-1-64368-105-4 (electronic)
OAI: oai:DiVA.org:his-18965
DiVA, id: diva2:1462386
Conference
6th International Digital Human Modeling Symposium, August 31 – September 2, 2020, Skövde, Sweden
Part of project
Synergy Virtual Ergonomics (SVE), Knowledge Foundation
Funder
Knowledge Foundation, 20180167
Note

CC BY-NC 4.0

Funder: Knowledge Foundation and the INFINIT research environment (KKS Dnr. 20180167). This work was financially supported by the Synergy Virtual Ergonomics project, funded by the Swedish Knowledge Foundation, dnr 20180167. https://www.his.se/sve

Available from: 2020-08-29 Created: 2020-08-29 Last updated: 2021-09-06. Bibliographically approved

Open Access in DiVA

fulltext (1059 kB), 178 downloads
File information
File name: FULLTEXT01.pdf
File size: 1059 kB
Checksum (SHA-512): 4942c9276ada49aabccb63bd63205f89410756a3a28d9bc6ba9a9864e6e2e1b1c54098f72afd90c517a25eee1334e0f5c7e69052eb9dc316df4eeb7aa5d16722
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Authority records

Billing, Erik
Lamb, Maurice

Search in DiVA

By author/editor
Billing, Erik
Lamb, Maurice
By organisation
School of Informatics; Informatics Research Environment; School of Engineering Science; Virtual Engineering Research Environment
Interaction Technologies

Search outside of DiVA

Google
Google Scholar
Total: 178 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are now no longer available.

Total: 742 hits