What does fairness mean? How is fairness perceived? How can fairness be implemented in AI?
Data-based decision-making systems are increasingly affecting people’s lives. Such systems decide whose loan application is approved, who is invited to a job interview, and who is admitted to university. This raises the difficult question of how to design these systems so that they are compatible with norms of fairness and justice. Clearly, this is not simply a technical question: designing such systems requires an understanding of their social context and forces us to engage with philosophical questions.
Our team therefore combines expertise from the fields of philosophy, computer science, and economics to tackle this question. Our interdisciplinary project aims to develop a methodology for designing fair AI applications. This methodology will help guide stakeholders who are considering the use of AI in a socially responsible way, and it will make it possible to train software developers in ethical topics.
|Oct, 2021||As part of this year’s Swiss Digital on November 10, we will be at the opening of “Nüü”. There we’ll show one of the clips on “(un)fair algorithms” that we’re currently producing with the help of the Swiss production company Tristesse, and you’ll have the chance to test an algorithmic hiring tool yourself.|
|Sep, 2021||Corinna will be part of a panel discussion on the opportunities and risks of AI on September 6, 2021. The discussion will take place in Winterthur and is organized by Digital Winterthur. You can sign up here.|
|Jun, 2021||Corinna will speak at the UCL workshop “Fairness and Diversity in Statistical Science” on July 6, 2021.|
|Jun, 2021||We’re excited to share that our paper “A Systematic Approach to Group Fairness in Automated Decision Making” just won the prize for the best paper in ethical, legal, and social issues (ELSI) of data science at this year’s Swiss Conference on Data Science.|
|Jun, 2021||Christoph was a guest on the podcast “Hans wie Heiri”, which is produced by the Stiftung gegen Rassismus und Antisemitismus (GRA; Foundation against Racism and Antisemitism) and the Gesellschaft Minderheiten in der Schweiz (GMS) and focuses on human rights, democracy, and anti-racism. Listen to the episode “Rassismus vorprogrammiert? - Chancen und Risiken von Algorithmen” (in German) on Spotify, Apple Podcasts, Google Podcasts, or on GRA’s website.|
|May, 2021||Corinna will present our paper “A Systematic Approach to Group Fairness in Automated Decision Making” at the 8th Swiss Conference on Data Science (SDS 2021) on June 9, 2021.|
|Mar, 2021||Christoph spoke about algorithmic fairness at the Naturwissenschaftliche Gesellschaft Winterthur. You can find his talk here (in German).|
|Mar, 2021||We’re very honoured that our paper “On the Moral Justification of Statistical Parity” just won one of the two Best Student Paper awards at this year’s FAccT!|
|Dec, 2020||We started the reading group “(Interdisciplinary) Ethics of Algorithms” at DSI! We always meet on the first Tuesday of the month. You can find more information here. Come by and say hi!|
|Dec, 2020||Our paper “On the Moral Justification of Statistical Parity” was accepted to this year’s FAccT!|