Lantmäteriet and DIGG are jointly carrying out a government assignment to test new technology for automation in the public sector. The aim is to develop a model, or conceptual solution, for building trust in automation driven by AI and other new technologies such as blockchain.
AI Sweden now invites partners to take part in this information session and to contribute to the results. The Trust Model is inspired by quality and eco-labelling, by model cards, and by existing guidelines for AI ethics. After the presentation, participants are invited to discuss and give feedback on the conclusions.
The seminar will be held by Mats Snäll, Lantmäteriet.
Because AI suffers from the so-called black-box problem, meaning it is not always possible to explain how AI models reach their conclusions, questions of ethics and morality have been raised. With the demand for "explainable AI", there is a risk that automation and improvements driven by new technology are slowed or blocked.
The trust model, which will be presented at this seminar, is intended to describe and report on an AI application's ability to perform a task, and thereby serve as a quality stamp that gives clarity and transparency to public authorities' automated measures. It can be seen as a kind of Nordic Ecolabel (Svanenmärkning) for AI models in the Swedish public sector.
This seminar is for partners only.