

Congratulations to Kevin Hector for his Best Paper award at the SECAI 2023 workshop!


Integrity and confidentiality of embedded neural networks – Tackling security threats in the physical dimension

Published on 26 July 2024

Neural networks are widely used in electronic components and embedded systems, notably for object recognition and language processing. While some models are public, secured networks may be targeted by attacks that seek to clone them, either to spoof them or to replicate their performance. Preventing this threat is a major security challenge for protecting both the functionality and the intellectual property of these innovations. Not all cyberattacks are conducted remotely: other methods exist, including physical attacks.

After a two-year preparatory course in Physics, Technology and Engineering Sciences (PTSI) at Lycée Marie Curie, Kevin studied at the ESTIA engineering school in Biarritz. In his final year, he majored in robotics and embedded systems, earning a double degree from the University of Salford, Manchester. Building on this experience, he began a PhD in the "Systèmes et Architectures Sécurisés" (SAS) joint research team operated by CEA-Leti and the École des Mines de Saint-Étienne.


Kevin identified how attackers could steal information from embedded neural networks deployed on microcontrollers by injecting laser faults. Using this method, they can illicitly recover some of the targeted neural network's internal parameters (approximately 80% of their bits) and then use this information to efficiently train a substitute model that mimics the victim model's performance. CEA is currently researching countermeasures against this kind of vulnerability.
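For illustration only, here is a minimal Python sketch (using PyTorch) of the second step described above: training a substitute model to mimic a victim model's outputs, starting from partially recovered parameters. The architecture, the masking of whole weights (a simplification of recovering individual bits), and all names are assumptions for this sketch, not the team's actual method or code.

    # Hypothetical sketch: substitute-model training after partial
    # parameter recovery. Not the authors' implementation.
    import torch
    import torch.nn as nn

    def make_mlp():
        # Stand-in architecture; the real target runs on a microcontroller.
        return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

    victim = make_mlp()          # black-box "victim" for illustration
    substitute = make_mlp()      # attacker's clone, same assumed architecture

    # Assumption: ~80% of the victim's parameters were recovered by fault
    # injection. Simplified here by copying 80% of the weights at random
    # (the real attack recovers ~80% of the *bits*, not whole weights).
    with torch.no_grad():
        for p_sub, p_vic in zip(substitute.parameters(), victim.parameters()):
            known = torch.rand_like(p_vic) < 0.8
            p_sub.copy_(torch.where(known, p_vic, p_sub))

    opt = torch.optim.Adam(substitute.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for step in range(200):
        x = torch.randn(64, 16)                  # attacker-chosen query inputs
        with torch.no_grad():
            target = victim(x)                   # observed victim responses
        loss = loss_fn(substitute(x), target)    # fit substitute to the victim
        opt.zero_grad()
        loss.backward()
        opt.step()

Starting from mostly correct weights means far fewer queries are needed than when training a clone from scratch, which is what makes the combination of physical parameter recovery and substitute training efficient.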


“When we talk about cybersecurity, the stereotype of an attacker remotely hacking a computer system immediately springs to mind. However, our expertise focuses on a less well-known aspect of AI security: hardware security. We work directly at the physical level, interacting with microelectronic components to protect Machine Learning models against these forms of attack, which in the past have not been given adequate consideration.”


Kevin hopes to continue his work in this field, with the goal of making Machine Learning models more secure.

Thanks: Pierre-Alain Moellic, Jean-Max Dutertre, Mathieu Dumont


Visit our thesis offers portal

