Pilot study for federated language models in Swedish
AI Sweden joined the National Library of Sweden and Scaleout Systems for a pilot on federated training of language models. This was the first federated, large-scale training of artificial neural networks for language comprehension in Sweden and one of the first examples worldwide. The potential impact is substantial: it would enable more actors to use large, existing datasets without the data ever leaving its point of origin - thus addressing pressing challenges around data sharing and privacy. The pilot could also be a first, important step towards a shared Scandinavian language model.
The digital collections at the National Library are the largest and most advanced in existence for the Swedish language today. They have been used for some of the most successful work with large language models, including the widely used Swedish language model KB-BERT. This pilot study allows the National Library to combine its own data with text resources from other national libraries. As a first step, data from the Norwegian National Library will be included; this may later be expanded to Denmark and Finland, as well as the Swedish University Libraries. Moreover, the pilot will give other actors in Sweden the opportunity to train and benchmark large language models.
Training models centrally requires large amounts of data to be transferred while fulfilling complicated technical and legal requirements. It is, for example, difficult for the National Library to share its data outside of its own organization. Federated learning instead sends the algorithm out to the site where the data originates and trains the model there. As a result, the data never leaves the originating site at the library. Knowledge and insights - instead of raw data - are aggregated centrally.
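The idea above can be sketched in code. The following is a minimal, hypothetical simulation of federated averaging (the common aggregation scheme in federated learning), not the pilot's actual system: each simulated "library" holds a private data shard, trains a small linear model locally, and only the model weights - never the data - are sent back and averaged. All names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three sites, each holding a private data shard
# generated from the same underlying relationship y = X @ [2, -1] + noise.
def make_client_data(n):
    X = rng.normal(size=(n, 2))
    y = X @ np.array([2.0, -1.0]) + rng.normal(scale=0.1, size=n)
    return X, y

clients = [make_client_data(100) for _ in range(3)]

def local_update(w, X, y, lr=0.1, epochs=5):
    # Gradient descent on this client's own data only; the raw
    # (X, y) never leaves the site - only the updated weights do.
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# The central server holds only the global weights.
w_global = np.zeros(2)
for _ in range(10):
    # Each site trains locally; the server averages the returned weights.
    local_weights = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)

print(w_global)  # approaches the underlying coefficients [2, -1]
```

In a real deployment the averaging step would run on a coordination server (e.g. via a framework such as Flower or FEDn), but the division of labor is the same: local computation at each library, central aggregation of model parameters only.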