Enroll in CSCE 492/892-001: Deep Learning and Assured Autonomy Analysis

Enroll in CSCE 990-003: Deep Learning and Assured Autonomy Analysis.

CSCE 492/892-001: Deep Learning and Assured Autonomy Analysis
Instructor: Tran, Dung
Lecture
Location: Military and Naval Science Building
Room: TBA
Days: TR
Time: 9:30–10:45 AM

Over the last two decades, artificial intelligence (AI) has blossomed, with implications reaching from healthcare, marketing, banking, and gaming to the automotive industry. It is not an exaggeration to claim that AI will significantly benefit and affect many aspects of human life now and in the future. Although AI is powerful and even outperforms humans on many complicated tasks, it has fueled a longstanding debate among researchers, tech companies, and lawmakers: can we bet human lives on AI?

To use AI in safety-critical applications, there is an urgent need for methods that can prove the safety of AI systems. Conventional approaches that demonstrate safety through extensive simulation and rigorous testing are usually costly and incomplete. For example, to achieve catastrophic failure rates of less than one per hour, autonomous vehicle systems would need to perform billions of miles of test driving. More importantly, such driving tests cannot cover all the corner cases that may arise in the field.

Consequently, new approaches based on formal methods, safe planning and synthesis, and robust learning are urgently needed, not only to prove but also to enhance the safety and reliability of AI systems. In principle, these approaches can automatically explore all unforeseen scenarios when verifying or falsifying the safety of AI systems. They can also generate provably correct planning decisions and safe control actions, and improve the robustness of AI systems under uncertain scenarios and adversarial attacks.

As an essential step toward tackling these challenging problems, this course focuses on understanding state-of-the-art techniques and tools for safety and robustness verification of deep neural networks and of autonomous systems with learning-enabled components. Since assured autonomy and safe AI are rapidly emerging, graduate students can benefit from this course by engaging with one of the most active research directions across diverse communities such as AI, security, software engineering, and verification. Students are expected to learn (and possibly reimplement) novel ideas for analyzing the safety and robustness of deep neural networks and learning-based autonomous systems via a collection of papers and tools from leading research groups.

The course will cover four parts:
1) Safety and Robustness Verification of Deep Neural Networks, including Feedforward Neural Networks (FNNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Semantic Segmentation Networks (SSNs). Students will learn novel methods and tools from the following groups (a toy sketch of the underlying robustness question appears after this list):
• Reluplex and Marabou approach from Stanford University
• CROWN approach from MIT and UCLA
• MILP approach from Imperial College London
• ERAN approach from ETH Zurich
• NNV approach from Vanderbilt University
• ReluVal and Neurify approach from Columbia University
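To give a flavor of the kind of question these tools answer, here is a minimal sketch of local robustness checking with interval bound propagation, written in plain Python/NumPy. It is not the algorithm of any specific tool above; the toy network weights, the perturbation radius eps, and the helper names (interval_affine, ibp_bounds) are invented for illustration.

# A minimal sketch of local robustness checking with interval bound
# propagation (IBP) for a toy ReLU feedforward network, using only NumPy.
# The weights, the radius eps, and the helper names are invented for
# illustration; this is not the algorithm of any specific tool listed above.
import numpy as np

# Toy network: 2 inputs -> 3 hidden ReLU units -> 2 output classes.
W1 = np.array([[ 1.0, -0.5],
               [ 0.3,  0.8],
               [-0.7,  0.2]])
b1 = np.array([0.1, -0.2, 0.0])
W2 = np.array([[ 0.6, -1.0, 0.4],
               [-0.3,  0.9, 0.5]])
b2 = np.array([0.0, 0.1])

def interval_affine(lo, hi, W, b):
    # Propagate the box [lo, hi] through x -> W x + b exactly.
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c, r = W @ center + b, np.abs(W) @ radius
    return c - r, c + r

def ibp_bounds(x, eps):
    # Bounds on the outputs for every input within L-infinity distance eps of x.
    lo, hi = x - eps, x + eps
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    return interval_affine(lo, hi, W2, b2)

x = np.array([0.5, -0.2])
lo, hi = ibp_bounds(x, eps=0.05)
pred = int(np.argmax(W2 @ np.maximum(W1 @ x + b1, 0.0) + b2))
best_other = max(hi[i] for i in range(len(hi)) if i != pred)
# Robust if the predicted class's lower bound beats every other class's upper
# bound; otherwise the result is inconclusive, not a proof of non-robustness.
print("verified robust" if lo[pred] > best_other else "unknown (bounds too loose)")

The tools above go far beyond this sketch: they tighten such bounds with symbolic, SMT/MILP, or set-based reasoning so that an "unknown" answer can often still be decided one way or the other.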
2) Safety Verification of Neural-Network-Based Control Systems, where we will learn novel methods and tools from the following groups (a coarse closed-loop reachability sketch follows this list):
• The Verisig approach from the University of Pennsylvania
• The NNV approach from Vanderbilt University
• The Sherlock approach from the University of Colorado (Boulder)
• The ReachNN approach from Northwestern University and Boston University
• The SMC approach from the University of California, Irvine
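As a rough illustration of closed-loop analysis, the sketch below propagates interval bounds through a toy neural-network controller and simple double-integrator-style dynamics. The plant model, controller weights, horizon, and unsafe region are all assumptions made for illustration; the tools above use far tighter reachable-set representations than plain boxes.

# A coarse sketch of closed-loop reachability for a neural-network-controlled
# system, again with plain interval arithmetic. The plant, controller weights,
# horizon, and unsafe region (position > 2) are assumptions for illustration.
import numpy as np

# Toy controller: state (position, velocity) -> ReLU(Wc x + bc).
Wc = np.array([[-0.8, -0.6]])
bc = np.array([0.0])

def control_bounds(lo, hi):
    # Interval bounds on the controller output over a box of states.
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    u_c, u_r = Wc @ c + bc, np.abs(Wc) @ r
    return np.maximum(u_c - u_r, 0.0), np.maximum(u_c + u_r, 0.0)  # ReLU

def step_bounds(lo, hi, dt=0.1):
    # One Euler step of x' = v, v' = u, propagated as intervals (dt > 0).
    u_lo, u_hi = control_bounds(lo, hi)
    new_lo = np.array([lo[0] + dt * lo[1], lo[1] + dt * u_lo[0]])
    new_hi = np.array([hi[0] + dt * hi[1], hi[1] + dt * u_hi[0]])
    return new_lo, new_hi

# Initial set: position in [0, 0.1], velocity in [0.9, 1.0].
lo, hi = np.array([0.0, 0.9]), np.array([0.1, 1.0])
for k in range(15):
    lo, hi = step_bounds(lo, hi)
    if hi[0] > 2.0:
        print(f"step {k}: cannot prove safety (reach set may hit position > 2)")
        break
else:
    print("safe for all 15 steps (under this coarse over-approximation)")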
3) Testing and Falsification Methods (a simple falsification sketch follows this list):
• The VerifAI approach from the University of California, Berkeley
• The simulation-based testing approach from Arizona State University and Toyota Research
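In contrast to verification, falsification searches for a concrete violating behavior. The sketch below uses naive random search over initial conditions to minimize a robustness value for a made-up braking scenario; the model, the requirement, and the search strategy are all invented for illustration and are much simpler than what the tools above provide.

# A minimal sketch of simulation-based falsification: search over initial
# conditions for a trajectory that violates a safety requirement. The braking
# model, the requirement (position must stay below 10), and the random search
# are assumptions made for illustration only.
import random

def simulate(x0, v0, steps=50, dt=0.1):
    # Toy braking model: constant deceleration of 2 m/s^2 until stopped.
    x, v, trace = x0, v0, []
    for _ in range(steps):
        v = max(v - dt * 2.0, 0.0)
        x = x + dt * v
        trace.append(x)
    return trace

def robustness(trace, limit=10.0):
    # How far the trace stays below the unsafe position; negative = violation.
    return min(limit - x for x in trace)

random.seed(0)
best = None
for _ in range(1000):  # naive random search over initial conditions
    x0, v0 = random.uniform(0.0, 5.0), random.uniform(0.0, 12.0)
    rob = robustness(simulate(x0, v0))
    if best is None or rob < best[0]:
        best = (rob, x0, v0)

rob, x0, v0 = best
if rob < 0:
    print(f"counterexample found: x0={x0:.2f}, v0={v0:.2f} (robustness {rob:.2f})")
else:
    print(f"no violation found; minimum robustness {rob:.2f} at x0={x0:.2f}, v0={v0:.2f}")

The tools above replace this naive random search with scenario specification languages, temporal-logic robustness semantics, and smarter optimizers.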
4) Discussion on promising research directions