Artificial Intelligence Security

Introduction

The past several years have witnessed the rapid development of deep learning (DL). DL models are now widely adopted in many scenarios, e.g., image classification, speech recognition, natural language processing, and robotics control, and these applications significantly enhance the quality of life. However, they also introduce new security threats to DNN models, including backdoor attacks, adversarial attacks, model extraction attacks, and privacy inference attacks. It is therefore critical to protect these models against existing and potential integrity and privacy attacks, especially in safety-critical fields such as autonomous driving and smart medical care. Our team aims to promote the academic research and industrial practice of artificial intelligence security, and to explore new theories, methods, and techniques for AI security and privacy protection.

News

🎉 Jun. 2022: One paper accepted by TCSVT (CCF B)! Congrats to Xiaoxuan!

🎉 May 2022: One paper accepted by TBD! Congrats to Biwen and Honghong!

🎉 Apr. 2022: One paper accepted by TOMM (CCF B)! Congrats to Honghong!

🎉 Apr. 2022: One paper accepted by NAACL (CSL@CQU A)! Congrats!

🎉 Jan. 2022: Two papers accepted by ICLR (one spotlight, CSL@CQU A+, and one poster, CSL@CQU A)! Congrats to Xiaoxuan and Kangjie!

🎉 Jan. 2022: Our paper “EGM: An Efficient Generative Model for Unrestricted Adversarial Examples” accepted by TOSN (CCF B)! Congrats to Hangcheng!

🎉 Sep. 2021: One paper accepted by TCSVT (CCF B)!

🎉 Aug. 2021: One paper accepted by TCSVT (CCF B)!

🎉 Jul. 2021: One paper accepted by ACM MM (CCF A)! Congrats to Ying!

Grants

  • NSFC Young Scientists Fund: Byzantine Fault Tolerance for Decentralized Federated Learning Based on System Characteristics, 2022–2024
  • NSFC Young Scientists Fund: Transferable Cross-Modal Adversarial Example Generation, 2022–2024
  • Chongqing Natural Science Foundation General Program: Byzantine Attacks and Defenses in Decentralized Federated Learning, Oct. 2021 – Sep. 2023
  • Chongqing Natural Science Foundation General Program: Few-Shot Image Generation Based on Adversarial Learning, 2021–2024
  • China Postdoctoral Science Foundation General Program: High-Quality Transferable Adversarial Example Generation in Few-Shot Settings, 2020–2023

AI Robustness

AI Privacy

AI Security in Distributed Systems

NLP Security