The EU InSecTT project wrapped up in fall 2023, just a few months before European lawmakers reached political agreement on the AI Act. The timing is significant: the research conducted by CEA-Leti within the project closely parallels the issues addressed by the new regulation.
A lack of standards for assessing the reliability of AI
The AI Act responds to the meteoric rise of artificial intelligence, from generative AI such as ChatGPT to embedded AI (industrial robots, production equipment, automotive systems, home automation, smart cities and more). It lays down rules to ensure that this wave of innovation goes hand in hand with transparency, data governance and respect for fundamental rights.
As a result, AI could be banned in applications where the risks are considered unacceptable, such as remote biometric identification in public spaces. The text also paves the way for future certification frameworks for the safety and security of AI systems.
However, there are currently no protocols, standards or norms for assessing the reliability of an embedded AI system.
“For the past ten years, R&D on AI security has focused on algorithmic aspects rather than on hardware and software implementations such as a microcontroller or system-on-chip,” explains Pierre-Alain Moëllic, InSecTT Coordinator for CEA-Leti.
The recent work by CEA-Leti researchers has begun to fill this gap.
Important scientific results
The researchers rose to the challenge, delivering three types of advances and 11 scientific publications, one of which won a Best Paper Award.
Their studies first demonstrated the technical feasibility of authenticating embedded AI systems. For instance, if a production line of industrial robots with embedded AI stops working, the line manager needs to be able to verify that the robots have not been hacked and that their data and models are intact.
CEA-Leti met this challenge with HistoTrust, a technological platform developed in collaboration with IRT Nanoelec. The platform combines blockchain technology with secure hardware modules to guarantee the authenticity of AI systems as close to the robots as possible.
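HistoTrust's implementation details are not covered here, but the general attest-then-verify pattern it relies on is easy to sketch. The minimal Python example below simulates both the hardware-held key and the ledger; everything in it is illustrative rather than the actual platform:

```python
# Illustrative sketch only -- not the HistoTrust implementation.
# Pattern: a key held in secure hardware signs a digest of the deployed
# model; the signed attestation is anchored in an append-only ledger so
# that an operator can later prove the model has not been tampered with.

import hashlib
import hmac

DEVICE_KEY = b"key-held-in-secure-hardware"  # stand-in for a secure element
ledger: list[dict] = []                      # stand-in for the blockchain

def attest(model_bytes: bytes, device_id: str) -> dict:
    """Device side: hash the model, sign the digest, anchor it on the ledger."""
    digest = hashlib.sha256(model_bytes).hexdigest()
    tag = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    record = {"device": device_id, "digest": digest, "tag": tag}
    ledger.append(record)
    return record

def verify(model_bytes: bytes, device_id: str) -> bool:
    """Operator side: recompute the digest and check it against the ledger."""
    digest = hashlib.sha256(model_bytes).hexdigest()
    for record in reversed(ledger):          # latest attestation first
        if record["device"] == device_id:
            tag = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
            return record["digest"] == digest and hmac.compare_digest(record["tag"], tag)
    return False

model = b"...model weights as raw bytes..."
attest(model, "robot-7")
print(verify(model, "robot-7"))         # True: untouched model passes
print(verify(model + b"!", "robot-7"))  # False: any tampering is detected
```

A real deployment would use asymmetric signatures from a hardware security module rather than a shared HMAC key, and an actual blockchain rather than an in-memory list, but the attest-then-verify pattern is the same.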
Exposing security flaws
CEA-Leti's second breakthrough was to demonstrate flaws in embedded AI systems exposed to two types of physical attacks: electromagnetic eavesdropping and fault injection. Such attacks can have serious consequences, from degrading system performance to reverse engineering the AI model, a major risk given that the model often represents a large share of an industrial system's value.
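CEA-Leti's attack setups are not detailed here, but the stakes are easy to illustrate. The short Python sketch below is a simulation rather than a physical attack: it shows how flipping a single bit in one model weight, the kind of corruption a voltage glitch, laser or electromagnetic pulse can induce, derails a neuron's output:

```python
# Illustrative sketch only: why a single injected fault matters.
# Flipping one bit in the IEEE-754 exponent of a float32 weight can
# change a neuron's output by dozens of orders of magnitude, which is
# more than enough to alter a model's decisions.

import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a float32 and return the corrupted value."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (corrupted,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return corrupted

weights = [0.12, -0.53, 0.91]
inputs = [1.0, 2.0, 3.0]

clean = sum(w * x for w, x in zip(weights, inputs))

faulty_weights = weights.copy()
faulty_weights[0] = flip_bit(weights[0], 30)  # flip the top exponent bit

faulty = sum(w * x for w, x in zip(faulty_weights, inputs))

print(f"clean output:  {clean:.4f}")   # ~1.7900
print(f"faulty output: {faulty:.4e}")  # ~4e+37, wildly wrong
```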
Finally, the researchers developed methods for characterizing these vulnerabilities, along with protective security mechanisms that are currently being fine-tuned.
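The specific protections under development are not described here. As one simple illustration of the kind of mechanism used in this field, the sketch below shows temporal redundancy, a textbook defence against fault injection in which an inference is run twice and any disagreement is treated as a fault; it is not CEA-Leti's mechanism:

```python
# Illustrative sketch only -- not CEA-Leti's mechanism. Temporal
# redundancy: run the same inference twice and refuse to answer when
# the two results disagree, since a transient injected fault rarely
# corrupts both runs in exactly the same way.

def protected_inference(infer, x):
    """Run `infer` twice on `x`; raise on disagreement instead of answering."""
    first = infer(x)
    second = infer(x)
    if first != second:
        raise RuntimeError("fault detected: redundant inferences disagree")
    return first

# Toy demo: a "model" whose second run is corrupted by a transient fault.
calls = 0
def glitchy_model(x):
    global calls
    calls += 1
    result = 2 * x
    return result + 1 if calls == 2 else result  # fault on the second run

try:
    protected_inference(glitchy_model, 21)
except RuntimeError as err:
    print(err)  # fault detected: redundant inferences disagree
```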
“These innovative methods may then be offered to industry, providing companies with cutting-edge solutions that enhance their technological capabilities and competitive advantage,” says Marion Andrillat, Head of Cybersecurity Industrial Partnerships at CEA-Leti.