Responsible use of AI

AI has the potential to transform entire industries as well as society at large. It is therefore essential that this transformation happens in a safe, lawful, and responsible way, so that people are not harmed, directly or indirectly, by applied AI solutions, while the potential that AI brings is still unlocked.

AI is developing at an extremely rapid pace. This calls for close attention to the implications of adopting AI, and for building trust at a societal level over time. Each AI use case carries unique risks and considerations that require context-specific approaches. We also need to support experimentation and exploration, as this is where potential risks become concrete and can be addressed.

Moreover, because of this speed of development, we cannot yet foresee all possible risks and opportunities of AI. It is therefore paramount that we build a strong AI ecosystem capable of applying responsible principles over time. A robust AI ecosystem requires national and international collaboration, the active contribution and participation of a diverse group of stakeholders and domain experts, and the sharing of information and best practices.

The adoption of AI spans many fields, both technical, such as data management and infrastructure, and non-technical, such as the behavioral sciences and people management. A multi-disciplinary approach throughout the lifecycle of AI development and application will therefore deepen our understanding of how AI solutions can be most useful without producing undesirable results, as well as of their effect on society as a whole.

For organizations developing or using AI, we encourage building on existing work by organizations such as the EU, NATO, the OECD, and UNESCO, and would like to place additional emphasis on a few specific perspectives.

Lawfulness and transparency: All AI solutions should be developed and used within the rule of law, and stakeholders should be held accountable accordingly. We strongly advocate proportionate and reasonable transparency about how AI systems are developed and how they work. This involves communicating the capabilities and limitations of AI understandably to all stakeholders, including end users and citizens.

Ethical and fair: AI systems must be designed and operated in a manner that respects human rights, values, and cultural diversity. This includes mitigating risks inherent in AI algorithms, such as bias and discrimination. It is essential to consider ethical implications at every stage of AI development and deployment.

Safety and robustness: AI systems must be reliable, controllable, and secure. They should function as intended under various conditions and be resilient to manipulations and errors. This includes safeguarding against malicious use of AI technologies and ensuring that AI does not pose unintended harm to people or the environment.

Assuming agreement on the above principles, the challenge now lies in turning them into action: translating them into real-life scenarios and applications. Here, we also have a responsibility to use AI where appropriate to address our time's most pressing global challenges.

For more information, contact

Johanna Bergman
Director of Strategic Development
+46 (0)73-157 05 09