The massive increase in data collection worldwide calls for new ways to preserve privacy while still allowing analytics and machine learning among multiple data owners. Today, the lack of trusted and secure environments for data sharing inhibits the data economy, while concerns over legality, privacy, trustworthiness, data value and confidentiality hamper the free flow of data, calling for new privacy-preserving technologies (as reflected in reports such as "Data protection in the era of Artificial Intelligence").
PRIMAL aims to demonstrate the implementation of a privacy-preserving machine learning approach based on the concept of federated machine learning (FML). The goal is to advance incrementally from TRL-4 (lab) to TRL-6 (demonstrated in a relevant environment), testing it on an existing pharma-healthcare use case: prediction of Major Adverse Cardiovascular Events (MACE) in patients with diabetes. The execution will be carried out with real patients' historical datasets, to demonstrate the added value with respect to the state of the art in privacy- and trust-enhancing technologies, as well as its efficiency, scalability and suitability for deployment in real scenarios at a larger scale.
PRIMAL will alleviate privacy-related data sharing barriers by enabling secure, privacy-preserving analytics over decentralised datasets using machine learning algorithms (specifically deep learning). Data is kept in different locations under the control of the data owners, each with their own privacy constraints, yet secure collaborative machine learning is still possible without data centralisation.
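To illustrate the idea behind FML, the sketch below shows federated averaging (FedAvg), a common baseline for this kind of collaborative training. It is a minimal, hypothetical example, not PRIMAL's actual implementation: each simulated data owner trains a simple linear model locally on its own data, and only the model parameters, never the raw records, are sent to a coordinating server, which combines them weighted by dataset size.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One data owner's local step: a few rounds of gradient descent
    on a linear model with mean-squared-error loss. Raw data (X, y)
    never leaves this function's owner."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server-side aggregation: average the clients' parameters,
    weighting each client by the size of its local dataset."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two hypothetical data owners with synthetic data generated from the
# same underlying relationship (true_w); only parameters are shared.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (40, 60):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
```

In a real deployment the local models would be deep networks and the parameter exchange would itself be protected (e.g. with secure aggregation or differential privacy), but the structure — local training, parameter sharing, server-side averaging — is the same.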
PRIMAL will demonstrate an FML implementation within a real pharma-healthcare use case with real patients' datasets. The implementation will address four challenges:
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the NGI_TRUST grant agreement no 825618.