In the recently published Proceedings of the 54th Hawaii International Conference on System Sciences (HICSS), prenode’s co-CEO Robin Hirt, together with a team of researchers, explores the possibilities of reusing neural networks on distributed data sets. Reusing machine learning models holds great potential, but not every transfer is successful; indicators can help estimate the transferability of neural networks.
Together with co-authors Akash Srivastava, Carlos Berg, and Niklas Kühl, Hirt performs an empirical study on a use case involving different restaurants’ sales data. Neural networks are trained on the individual restaurants’ data sets and then transferred between them using transfer learning. The results suggest that indicators such as net similarity can be used to estimate the performance of a transferred model.
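The post does not spell out how net similarity is computed, so the following is only an illustrative sketch: one simple way to compare two trained networks is the cosine similarity of their flattened weight vectors, where values near 1 suggest the nets are close and a transfer may be more promising. The function name `net_similarity` and the toy networks are assumptions for illustration, not the paper’s definition.

```python
import numpy as np

def net_similarity(weights_a, weights_b):
    """Cosine similarity between two networks' flattened weights.

    A hypothetical stand-in for a net-similarity indicator: values
    near 1 mean the parameter vectors point in nearly the same
    direction, hinting that a model transfer is more likely to work.
    """
    a = np.concatenate([w.ravel() for w in weights_a])
    b = np.concatenate([w.ravel() for w in weights_b])
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two toy "networks", each with one weight matrix and one bias vector.
rng = np.random.default_rng(0)
net_a = [rng.normal(size=(4, 3)), rng.normal(size=3)]
net_b = [w + 0.05 * rng.normal(size=w.shape) for w in net_a]  # near-copy of net_a

similarity = net_similarity(net_a, net_b)  # close to 1 for the near-copy
```

In practice such an indicator would be computed before a transfer, to decide whether a source model is worth fine-tuning on the target data at all.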
The models’ scores show that transfer learning can significantly improve the performance of a machine learning system. Training individual models on several distributed data sets and subsequently transferring them is therefore a useful way to enhance a system’s utility.
In addition, the team states that pre-trained models can benefit from training on another data set, as this “outperforms the model built solely on the original distribution” (Hirt et al., 2021).
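The effect described above can be sketched with a minimal example, assuming a linear model trained by gradient descent rather than the paper’s actual neural networks: a model fitted on one “restaurant” is used to warm-start training on a second, smaller data set, and is compared against a model trained on the target data from scratch. All data, function names, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def gd_fit(X, y, w=None, epochs=200, lr=0.01):
    """Least-squares gradient descent; `w` allows warm-starting (transfer)."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

# Toy "restaurants": related but slightly shifted sales patterns.
X_src = rng.normal(size=(200, 5))
X_tgt = rng.normal(size=(40, 5))            # the target site has little data
w_true_src = np.array([1.0, -2.0, 0.5, 3.0, 1.5])
w_true_tgt = w_true_src + 0.3               # similar, but not identical
y_src = X_src @ w_true_src
y_tgt = X_tgt @ w_true_tgt

w_src = gd_fit(X_src, y_src)                           # source-restaurant model
w_transfer = gd_fit(X_tgt, y_tgt, w=w_src, epochs=20)  # fine-tuned on target
w_scratch = gd_fit(X_tgt, y_tgt, epochs=20)            # target-only baseline
```

With only a short training budget on the small target set, the warm-started model ends up closer to the target relationship than the model built from scratch, which mirrors the quoted finding in spirit.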
prenode’s Decentralized Machine Learning software mlx applies this technology and leverages an approach in which training data stays on the edge: only the individual, locally trained models are exchanged to build a combined model centrally. Find out more about our solution on the mlx page.
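The post does not describe how mlx combines the exchanged models, so the following is a generic sketch of the pattern, not mlx’s implementation: each site trains a model on its private data, only the weight vectors travel to a central step, and the central step averages them (a federated-averaging-style combination, assumed here for illustration).

```python
import numpy as np

rng = np.random.default_rng(2)

def local_fit(X, y, epochs=100, lr=0.05):
    """Train a linear model on one site's private data; the data never leaves."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

# Three sites that share the same underlying relationship (illustrative).
w_true = np.array([2.0, -1.0, 0.5])
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    sites.append((X, X @ w_true))

# Each site trains locally; only the weight vectors are exchanged.
local_models = [local_fit(X, y) for X, y in sites]

# Central step: combine the exchanged models, here by simple averaging.
w_global = np.mean(local_models, axis=0)
```

The point of the sketch is the data flow: raw sales records stay at each restaurant, while the much smaller model parameters are what gets shared and combined.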
Further information on the research can be found in the paper at hawaii.edu.
Did we spark your interest? Get in touch with us!