Rapid technical advancements have dramatically improved the abilities of artificial agents and thereby opened up new ways for humans to cooperate with them, from disembodied agents such as Siri to virtual avatars, robot companions, and autonomous vehicles. It is therefore relevant to study not only how to maintain appropriate cooperation, but also where the responsibility for this cooperation resides and how it may be affected. While previous work has organised agents and human-agent interaction (HAI) research into taxonomies, situations involving highly responsible artificial agents are rarely covered. Here, we propose a way to categorise agents in terms of such responsibility and agent autonomy, covering the range of cooperation from humans receiving help from agents to humans providing help to agents. The resulting diagram presented in this paper makes it possible to relate different kinds of agents to other taxonomies and to their typical properties. A particular advantage of this taxonomy is that it highlights under what conditions certain effects known to modulate the relationship between human and artificial agents (such as the protégé effect or the "we"-feeling) arise.