Speeding up Federated Learning Convergence using Transfer Learning
The recent advances in machine learning based on deep neural networks, coupled with the availability of vast storage capacity, are transforming the industrial landscape. However, these novel machine learning approaches are known to be data hungry, as they need to tune a huge number of parameters in order to perform well. As more and more AI-based applications are deployed to learn from personal data, privacy concerns are rising, especially in sensitive domains such as medicine, finance, or mobile-related data. With the ubiquitous availability of cloud-based solutions at very low prices, privacy concerns have become even more acute.
To overcome these issues, collaborative frameworks such as Federated Learning (FL) have recently emerged and are accepted as realistic and adoptable solutions by healthcare practitioners. In an FL setup, actors locally learn a model on their private data and share only the model with a server in charge of aggregating the extracted knowledge.
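The aggregation step described above can be sketched as follows. This is a minimal illustration of a FedAvg-style round, not the specific method of this work: each client trains locally, and the server averages the resulting parameters weighted by local dataset size. All names (`aggregate`, `client_weights`, `client_sizes`) are illustrative assumptions.

```python
def aggregate(client_weights, client_sizes):
    """Weighted average of client model parameters.

    client_weights: list of parameter vectors (one list of floats per client).
    client_sizes: number of local training examples per client, used as weights.
    """
    total = sum(client_sizes)
    num_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(num_params)
    ]

# Two clients with a 2-parameter model; the first holds twice as much data,
# so its parameters contribute twice as much to the global model.
global_model = aggregate([[1.0, 2.0], [4.0, 5.0]], [200, 100])
# -> [2.0, 3.0]
```

The size weighting matters precisely because of the population drift mentioned below: a plain unweighted mean would let a small, unrepresentative cohort pull the global model disproportionately.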
While the first proofs of concept show very promising results, some challenges still remain in the medical domain, where population drift from one hospital to another is an identified phenomenon, and where the data dimensionality makes local knowledge extraction difficult.