his.se Publications
Agent Autonomy and Locus of Responsibility for Team Situation Awareness
University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. (Interaction Lab)
University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. (Skövde Artificial Intelligence Lab (SAIL)) ORCID iD: 0000-0003-2900-9335
University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. University of Plymouth, United Kingdom. (Interaction Lab) ORCID iD: 0000-0003-1177-4119
2017 (English). In: HAI '17: Proceedings of the 5th International Conference on Human Agent Interaction, New York: Association for Computing Machinery (ACM), 2017, pp. 261-269. Conference paper, Published paper (Refereed)
Abstract [en]

Rapid technical advancements have dramatically improved the abilities of artificial agents and thus opened up new ways of cooperation between humans and such agents, from disembodied agents such as Siri to virtual avatars, robot companions, and autonomous vehicles. It is therefore relevant to study not only how to maintain appropriate cooperation, but also where the responsibility for this cooperation resides and how it may be affected. While previous work has organised and categorised agents and HAI research into taxonomies, situations with highly responsible artificial agents are rarely covered. Here, we propose a way to categorise agents in terms of such responsibility and agent autonomy, covering the range of cooperation from humans receiving help from agents to humans providing help to agents. In the resulting diagram presented in this paper, different kinds of agents can be related to other taxonomies and to typical properties. A particular advantage of this taxonomy is that it highlights under what conditions certain effects known to modulate the relationship between agents (such as the protégé effect or the "we"-feeling) arise.

Place, publisher, year, edition, pages
New York: Association for Computing Machinery (ACM), 2017. pp. 261-269.
Keyword [en]
HAI, Locus of Responsibility, Agent Relationship, Classification of Artificial Agents
National Category
Interaction Technologies
Research subject
Interaction Lab (ILAB); Skövde Artificial Intelligence Lab (SAIL)
Identifiers
URN: urn:nbn:se:his:diva-14269
DOI: 10.1145/3125739.3125768
ISBN: 978-1-4503-5113-3 (electronic)
OAI: oai:DiVA.org:his-14269
DiVA: diva2:1153502
Conference
5th International Conference on Human Agent Interaction, Bielefeld, October 17-20, 2017
Projects
Dreams4Cars
Funder
EU, Horizon 2020, 731593
Available from: 2017-10-30 Created: 2017-10-30 Last updated: 2017-11-15

Open Access in DiVA

No full text

Other links

Publisher's full text

Search in DiVA

By author/editor
Lagerstedt, Erik; Riveiro, Maria; Thill, Serge
