Artificial Partners is a public lecture series hosted by the CVBE group at the Chair of Philosophy of Mind. The series addresses topics concerning human interaction with machines and artificial intelligence.
The upcoming talk will be given by Matthias Uhl (Technical University of Munich, TUM) on February 6th, from 18:00 to 19:30, at the LMU Munich Center for Ethics, Room M210.
Matthias Uhl: "How are artificial agents perceived? An experimental approach"
"I will present experimental designs and preliminary findings from our research program on the perception of artificial agents. This perception arguably informs the meta-ethical discussion of whether these entities should be considered ethical actors. It also bears relevance for normative ethics, because a human's interaction with an artificial agent is likely to influence his or her ethical behavior. From an impartial observer's perspective, we identify a strong aversion to the delegation of ethically relevant tasks to artificial as opposed to human agents. Two potential explanations for this phenomenon will be discussed: the retribution gap and a preference for empathy. We will also look at two concrete examples where the use of artificial agents may be perceived as desirable: the case of randomization in social dilemmas and the commitment against human impulses."
The first talk in the series took place on Thursday, January 16, from 18:00 to 19:30, at the LMU Munich Center for Ethics, Room M210.
Fleur Jongepier (Radboud University): "Do algorithms know better? First-person authority in the age of big data"
We normally think individuals are authoritative when it comes to their own desires, beliefs, intentions and values. The idea that individuals are authoritative about their own attitudes is firmly rooted in modern liberal values and structures many of our social practices, institutional arrangements, and public policies. In recent years, however, this principle of first-person authority has come under pressure from what I refer to as algorithmic authority: the idea that algorithms know us best. The opportunities afforded by aggregating big data have led to a steep rise in deference to algorithms, both by governments and corporations. This development leads to inevitable tensions between what individuals think they want, value and intend, and what algorithms predict or say they want, value and intend, e.g. in online marketing, health and policing contexts. These tensions are reinforced by the fact that algorithms have a potentially transformative effect in shaping people's attitudes. In this paper, I address the question of what type of authority algorithms might have, and how it differs from the type of authority individuals have over themselves. I suggest that the main difference concerns the grounds of their authority (evidence-based versus commitment-based). Subsequently, I turn to the question of what is at stake, morally speaking, when algorithms are unjustifiably deferred to, and argue that this involves a distinct type of epistemic injustice, namely, not being treated as a self-knower.