Seminars and Academic Talks

Adversarial Machine Learning
Date posted: 2019-10-22

Title: Adversarial Machine Learning

Time: Thursday, October 24, 2019, 16:10

Venue: Conference Room 2-3, Mathematics Building

Speaker: Fabio Roli, University of Cagliari, Italy

Abstract:

Machine-learning algorithms are widely used in cybersecurity applications, including spam filtering, malware detection, and biometric recognition. In these applications, the learning algorithm has to face intelligent and adaptive attackers who can carefully manipulate data to deliberately subvert the learning process. As machine-learning algorithms were not originally designed under such premises, they have been shown to be vulnerable to well-crafted, sophisticated attacks, including test-time evasion attacks (also known as adversarial examples) and training-time poisoning attacks. This talk introduces the fundamentals of adversarial machine learning through a well-structured overview of techniques for assessing the vulnerability of machine-learning algorithms to adversarial attacks (both at training and at test time), along with some of the most effective countermeasures proposed to date. We report application examples including object recognition in images, biometric identity recognition, and spam and malware detection.
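
To illustrate the kind of test-time evasion attack mentioned in the abstract, below is a minimal sketch (not taken from the talk) of the fast gradient sign method applied to a simple logistic-regression classifier. The toy dataset, the epsilon budget, and all variable names are assumptions made for this example only.

import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs in 2D.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
y = np.hstack([np.zeros(100), np.ones(100)])

# Train logistic regression with plain gradient descent on the log-loss.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)      # gradient step on the weights
    b -= 0.1 * np.mean(p - y)                # gradient step on the bias

# Pick a correctly classified class-0 point closest to the decision boundary.
scores = X[:100] @ w + b
idx = np.argmax(np.where(scores < 0, scores, -np.inf))
x = X[idx]

# Evasion step: perturb the input in the direction that increases the loss,
# using d(log-loss)/dx = (p - y) * w with true label y = 0.
p_x = 1.0 / (1.0 + np.exp(-(x @ w + b)))
grad_x = (p_x - 0.0) * w
epsilon = 0.5                                # perturbation budget (illustrative)
x_adv = x + epsilon * np.sign(grad_x)

print("clean score:      ", x @ w + b)       # negative: classified as class 0
print("adversarial score:", x_adv @ w + b)   # typically flips sign (class 1)

The sketch shows only the simplest white-box evasion setting; the talk's scope (poisoning attacks and countermeasures) goes well beyond it.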


Speaker biography:

Fabio Roli is a Full Professor of Computer Engineering at the University of Cagliari, Italy, and Director of the Pattern Recognition and Applications laboratory (http://pralab.diee.unica.it/). He is a partner and R&D manager of Pluribus One, a company he co-founded (https://www.pluribus-one.it). He has been doing research on the design of pattern recognition and machine learning systems for thirty years. His current h-index is 60 according to Google Scholar (June 2019). He is a Fellow of the IEEE and a Fellow of the International Association for Pattern Recognition. He was a member of the NATO advisory panel for Information and Communications Security, NATO Science for Peace and Security (2008–2011).
