AI Ethics Lab
Clinical medicine has historically overlooked gender, racial, and geographic diversity. As a result, the data currently used to train medical diagnosis algorithms risks carrying this prejudice, and AI solutions may even reinforce the bias over time. For example, skin cancer-detecting algorithms risk being less accurate for dark-skinned patients, since they are trained primarily on images of fair-skinned patients.
In an attempt to address ethical issues such as this, a number of checklists and guidelines for trustworthy and ethical AI have been produced by public decision-making bodies, industry, and academic institutions. However, implementing these guidelines and putting them into practice often proves challenging. AI Sweden is therefore launching the Swedish AI Ethics Lab with the aim of helping move AI ethics in Sweden from abstract guidelines to practical application.
The members appointed to this group are as follows:
Anna Nordell Westling, Sana Labs
Daniel Akenine, Microsoft
Evelina Anttila, Peltarion
Helena Thybell, Save the Children
Katarina Gidlund, Mid Sweden University
Louise Callenberg, PublicInsight
Magnus Boman, KTH Royal Institute of Technology and Karolinska Institutet
Martin Engström, Region Halland
Sara Övreby, Google
Stefan Larsson, Lund University
Theodor Andersson, Agency for Digital Government
The purpose of the Swedish AI Ethics Lab is to provide guidance and support in implementing ethics in AI development.
The AI Ethics Lab is an initiative of AI Sweden, and its members have been appointed by the AI Sweden steering committee.