cPAID

Cloud-based Platform-agnostic Adversarial aI Defence framework

The cPAID project, funded by Horizon Europe, works to strengthen the security and resilience of AI applications and operations. It is designed to research, develop, and deliver an advanced, cloud-based, platform-agnostic defense framework that provides holistic protection for AI systems against adversarial attacks and malicious actions.

As AI becomes increasingly embedded in the fabric of organizations, securing these systems against evolving threats is more critical than ever. cPAID tackles this challenge by addressing both poisoning and evasion adversarial attacks, combining state-of-the-art AI defense methods such as lifelong semi-supervised reinforcement learning, transfer learning, adversarial training, and feature reduction. These methods will be reinforced by security- and privacy-by-design principles, privacy-preserving techniques, explainable AI (XAI), generative AI, context awareness, and risk and vulnerability assessments. The project will also leverage threat intelligence to further strengthen the protection of AI systems.
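
To make the adversarial-training defense named above concrete, the sketch below shows a minimal training loop that hardens a classifier against Fast Gradient Sign Method (FGSM) evasion attacks. It is an illustrative assumption rather than cPAID's actual implementation: the PyTorch dependency, the SimpleNet model, the epsilon budget, and the synthetic data are all placeholders.

```python
# Illustrative sketch only: adversarial training against FGSM evasion attacks.
# SimpleNet, epsilon, and the synthetic data are placeholders, not cPAID code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleNet(nn.Module):
    """A tiny classifier standing in for an arbitrary model under protection."""
    def __init__(self, in_dim=20, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, x):
        return self.net(x)

def fgsm_perturb(model, x, y, epsilon=0.1):
    """Craft an evasion example with the Fast Gradient Sign Method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximally increases the loss, within an L-inf ball.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
    """One optimization step on a mix of clean and adversarially perturbed inputs."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = SimpleNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Synthetic data purely for demonstration.
    x, y = torch.randn(128, 20), torch.randint(0, 2, (128,))
    for step in range(5):
        print(f"step {step}: loss = {adversarial_training_step(model, optimizer, x, y):.4f}")
```

A full defense framework would combine such training-time hardening with the other measures listed above, such as risk assessment and threat intelligence, rather than relying on any single technique.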

Key objectives:

  • Security- and Privacy-by-Design: Developing guidelines that ensure AI applications are built with robust security and privacy measures from the ground up.
  • Resilience Against Adversarial Attacks: Thoroughly assessing the ability of machine learning (ML) and deep learning (DL) algorithms to withstand adversarial threats.
  • Ethical AI Compliance: Ensuring adherence to EU principles for AI ethics, promoting transparency, fairness, and accountability.
  • Real-World Validation: Testing and validating AI systems’ performance and security in real-life scenarios to demonstrate their reliability.

Through its research and development efforts, cPAID aspires to establish best practices and guidelines that will drive future certification schemes. These schemes aim to ensure that AI applications and systems are certified for their robustness, security, privacy, and ethical standards, advancing the safety and trustworthiness of AI technologies across Europe and beyond.