Responsible design and use of AI applications

Artificial intelligence (AI) is revolutionising our everyday lives - both professionally and privately - at breathtaking speed. Applications such as chatbots, voice assistants and generative AI can support or even automate processes and decisions and simplify IT user interfaces. The result is more efficient workflows, improved process quality and the simplification of routine activities.

However, these technological advances are also accompanied by challenges. A lack of transparency in the underlying processes and the output of AI ("black box AI"), discrimination by algorithms and a widening digital gap between users and non-users can hinder the spread of AI. At the same time, the ever-increasing performance of AI algorithms is placing the responsible design of AI applications more and more centre stage.

This field of research examines, from a user-centred perspective, how AI can be designed and used for the benefit of all stakeholders involved. Key factors are how users perceive AI applications, how these applications are designed, and privacy and trust in AI applications and their output.

  • AI-based decision support 
  • Information perception and trust in AI applications
  • AI applications and privacy

Example projects in this research field

  • A Multi-perspective Assessment of Channel-related Unfairness in Voice Assistants (SNSF project funding)
  • Algorithmic Management - Establishing Fair and Participative Shift Planning in Healthcare (funding as part of the digitalisation strategy of the University of Bern, together with Prof. Dr Philipp Baumann)

Selected publications

  • Weith, H.; Matt, C. (2023): Information Provision Measures for Voice Agent Product Recommendations — The Effect of Process Explanations and Process Visualizations on Fairness Perceptions, Electronic Markets (33:1), 57, DOI: 10.1007/s12525-023-00668-x.
  • Ebrahimi, S.; Matt, C. (2023): Not Seeing the (Moral) Forest for the Trees? How Task Complexity and Employees’ Expertise Affect Moral Disengagement with Discriminatory Data Analytics Recommendations, Journal of Information Technology, DOI: 10.1177/02683962231181148.
  • Lüthi, N.; Matt, C.; Myrach, T.; Junglas, I. (2023): Augmented Intelligence, Augmented Responsibility?, Business & Information Systems Engineering (65:4), pp. 391-401, DOI: 10.1007/s12599-023-00789-9.
  • Weiler, S.; Matt, C.; Hess, T. (2022): Immunizing with Information – Inoculation Messages Against Conversational Agents’ Response Failures, Electronic Markets (32), pp. 239-258, DOI: 10.1007/s12525-021-00509-9.
  • Lüthi, N.; Matt, C.; Myrach, T. (2021): A Value-Sensitive Design Approach to Minimize Value Tensions in Software-based Risk-Assessment Instruments, Journal of Decision Systems (30:2-3), pp. 194-214, DOI: 10.1080/12460125.2020.1859744.