A number of checklists and guidelines for trustworthy and ethical AI have been produced by public decision-making bodies, industry, and academic institutions in an attempt to address the ethical aspects of AI. However, implementing these guidelines and putting them into practice often proves to be a challenge. The AI Sweden Ethics Lab was launched with the aim of helping move AI ethics in Sweden from abstract guidelines to practical application.
The focus of the AI Ethics Lab discussions is not on how ethical issues are or should be regulated, but on which ethical aspects and additional perspectives should be considered in the implementation of AI solutions.
The first meeting, held at the end of May 2021, resulted in five strategic insights that will help guide future efforts in supporting the implementation of ethical AI.
Purpose is fundamental but not everything
The purpose and intended use of an AI solution is a strong guiding factor in deciding on ethical issues. Tracking young people's use of gaming apps might be laudable if the intention is to flag overuse, but not if the end goal is to encourage addiction. In reality, however, it is not always feasible to keep the purpose constantly in mind during the development process, when multiple smaller, but important, ethics-related decisions are made across a larger team.
A multi-disciplinary approach is needed
Diverse teams are often mentioned as a way to discover unwanted bias that a more homogeneous team might not capture or test for. The discussions here showed that although diversity based on personal attributes, such as gender balance in AI development teams, is important, we also need to shift our focus to disciplinary diversity. Bringing, for example, behavioral scientists or sociologists into AI development will have a direct effect, not only on understanding how algorithms may produce undesirable results, but on a wider understanding of the effects on society as a whole. What behavioral shifts are triggered when algorithm-based solutions reach new users, new target groups, or a new context?
Knowledge gap between management and developers
In order to make strategic decisions about how and when to use AI, ethics needs to be considered and well understood. Consequently, knowledge and awareness of AI and ethics must be present in organizations at the management level as well. But for it to be effective, this understanding still needs to be grounded in the reality of AI development teams. A good understanding of AI is a prerequisite for this and should be encouraged.
Getting the AI developing community on board is key
Ethics needs to be a factor when developing AI solutions. In reality, however, pragmatism and beating the state of the art often take precedence. Simply put, there is a tradeoff between performance and ethics, and development teams should have the support and incentives needed to navigate it. Getting the AI community on board with the importance of ethics will be key if organizations are to challenge the current norm of favoring performance.
We need to work on both a short-term and a long-term timeline
AI ethics is a huge field. Even setting aside the legal perspective, policies, and standards, there are still many issues to address in order to get one step closer to implementing AI ethics in practice. It is clear from the discussions that we need to work on several timelines at the same time. Certain changes, such as introducing non-technical competence into AI development teams to diversify their disciplinary knowledge, will take time. Yet AI developers are already confronted with ethical dilemmas on a daily basis, and giving them access to the right tools and support structures is fundamental and should be made a priority right here, right now. These two perspectives are not mutually exclusive and should both be addressed, although with different timelines and target groups in mind.
Read more about the AI Ethics Lab and its members on the AI Sweden website.