Talk by Prof. Ling Liu: "Robustness of Deep Learning Systems Against Deception"

Date:

July 18, 2019 (Thursday)

Time:

17:00-18:00

Venue:

National Institute of Informatics 12F (conference room #1208)
2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo

Speaker:

Ling Liu
Professor in the School of Computer Science at Georgia Institute of Technology

Prof. Dr. Ling Liu is a Professor in the School of Computer Science at Georgia Institute of Technology. She directs the research programs in the Distributed Data Intensive Systems Lab (DiSL), examining various aspects of large-scale data-intensive systems. Prof. Liu is an internationally recognized expert in the areas of Big Data Systems and Analytics, Distributed Systems, Database and Storage Systems, Internet Computing, Privacy, Security and Trust. Prof. Liu has published over 300 international journal and conference articles and is a recipient of best paper awards from a number of top venues, including ICDCS 2003, WWW 2004, the 2005 Pat Goldberg Memorial Best Paper Award, IEEE CLOUD 2012, IEEE ICWS 2013, ACM/IEEE CCGrid 2015, and IEEE Edge 2017. Prof. Liu is an elected IEEE Fellow and a recipient of the IEEE Computer Society Technical Achievement Award. She has served as general chair and PC chair of numerous IEEE and ACM conferences in the fields of big data, cloud computing, data engineering, distributed computing, very large databases, and the World Wide Web, and served as editor in chief of IEEE Transactions on Services Computing from 2013 to 2016. Currently, Prof. Liu is co-PC chair of The Web Conference 2019 (WWW 2019) and Editor in Chief of ACM Transactions on Internet Technology (TOIT). Her research is primarily sponsored by NSF, IBM, and Intel.

Program

Opening remarks
Masaru Kitsuregawa
Director General, National Institute of Informatics

Robustness of Deep Learning Systems Against Deception
Ling Liu
Professor in the School of Computer Science at Georgia Institute of Technology

We are entering an exciting era in which human intelligence is being enhanced by big-data-fueled artificial intelligence (AI) and machine learning (ML). However, recent work shows that privately trained DNN models are vulnerable to adversarial inputs. Such adversarial inputs inject a small amount of perturbation into the input data to fool machine learning models into misbehaving, turning a deep neural network against itself. As new defense methods are proposed, more sophisticated attack algorithms surface; this arms race has been ongoing since the rise of adversarial machine learning. This talk provides a comprehensive analysis and characterization of state-of-the-art attacks and defenses. As more mission-critical systems incorporate machine learning and AI as essential components of our social, cyber, and physical systems, such as the Internet of Things, self-driving cars, smart planets, and smart manufacturing, understanding and ensuring the verifiable robustness of deep learning becomes a pressing challenge. This includes (1) the development of formal metrics to quantitatively evaluate and measure the robustness of a DNN prediction with respect to intentional and unintentional artifacts and deceptions, (2) a comprehensive understanding of the blind spots and invariants in trained DNN models and the DNN training process, and (3) a statistical measurement of the trust and distrust we can place in a deep learning algorithm to perform reliably and truthfully. In this talk, I will use our cross-layer strategic teaming defense framework and techniques to illustrate the feasibility of ensuring robust deep learning through scenario-based empirical analysis.
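To make the notion of an adversarial perturbation concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one of the earliest attacks of this kind. It is an illustrative PyTorch example, not code from the talk; the names model, x, label, and epsilon are assumptions made for the sketch.

# Minimal FGSM sketch (illustrative only, not the speaker's framework).
# Assumes `model` is a pretrained PyTorch classifier, `x` an input batch
# with pixels in [0, 1], and `label` the true class indices.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    # Track gradients with respect to the input, not the weights.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Nudge every pixel by epsilon in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the perturbed image in the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()

The bound epsilon captures the "small amount of perturbation" the abstract refers to: a defense is commonly judged by how large epsilon must grow before the model's accuracy on such perturbed inputs collapses.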

Capacity:

100 persons

If you wish to participate, please fill out the application form linked below. Once you submit the form on our website, you will receive a participation certificate by e-mail; please print it out and present it at the reception on the day of the lecture.

Conference fee:

Free

Application form:

https://reg.nii.ac.jp/m?f=466

Contact:

Planning Team, National Institute of Informatics
nii-lec (at) nii.ac.jp
