NEURAL NETWORK VERIFICATION

M. Pawan Kumar

VMCAI Winter School, 2019

In recent years, deep neural networks have been successfully employed to improve performance on several tasks in computer vision, natural language processing and other related areas of machine learning. This has resulted in the launch of several ambitious projects in which neural networks will replace humans, including safety-critical applications such as autonomous navigation and personalised medicine. Given the high risk posed by a wrong decision in such applications, a key step in the deployment of neural networks is their formal verification: proving that a neural network satisfies a desirable property, or generating a counter-example to show that it does not. This tutorial will summarise the progress made in neural network verification thus far.
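To make the verification problem concrete, here is a minimal sketch in Python. The two-layer ReLU network, the input region and the property below are illustrative assumptions, not material from the tutorial. The sketch tries to disprove the property by random sampling, which is the flavour of the unsound methods covered in Part 3: a violating input is a genuine counter-example, but failing to find one proves nothing.

import numpy as np

# Toy network: f(x) = W2 @ relu(W1 @ x + b1) + b2 (weights are arbitrary).
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[1.0, 1.0], [-1.0, 0.5]])
b2 = np.array([0.0, 0.0])

def f(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Input region: an L-infinity ball of radius eps around x0.
x0 = np.array([0.5, 0.5])
eps = 0.1

# Property P: class 0 is preferred on the whole region, i.e.
# f(x)[0] > f(x)[1] for every x with ||x - x0||_inf <= eps.
def property_holds(x):
    y = f(x)
    return y[0] > y[1]

# Falsification by random sampling: an unsound check. Finding a
# violating x disproves P; finding none proves nothing.
rng = np.random.default_rng(0)
counterexample = None
for _ in range(10000):
    x = x0 + rng.uniform(-eps, eps, size=2)
    if not property_holds(x):
        counterexample = x
        break

if counterexample is not None:
    print("Property violated at", counterexample)
else:
    print("No counter-example found (property NOT proven).")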

The contents of the tutorial are divided into three parts. In the first part, we will formally describe a neural network and take a brief look at how its parameters are estimated from a training data set. This will allow us to establish both the computational difficulty of neural network verification and its practical importance. In the second part, we will look at approximate verification techniques that can be used in conjunction with parameter estimation algorithms to improve the robustness of neural networks; a small worked example of one such technique is sketched below. In the third part, we will describe a unified framework for the exact formal verification techniques that have been proposed in the literature. The framework will allow us to identify the strengths and weaknesses of each approach, which in turn will highlight interesting directions for future research.
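As an illustration of the second part, the sketch below implements interval bound propagation, one of the incomplete methods covered in Part 4 and used for robust training in the Gowal et al. reference below. The toy weights, region and property are again illustrative assumptions. The method is sound but incomplete: when the propagated bounds certify the property we obtain a proof; otherwise the result is inconclusive rather than a counter-example.

import numpy as np

def affine_bounds(W, b, lo, hi):
    # Propagate the box [lo, hi] through y = W x + b.
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # worst case over the box
    return new_center - new_radius, new_center + new_radius

def relu_bounds(lo, hi):
    # ReLU is monotone, so it maps boxes to boxes.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy two-layer ReLU network (arbitrary weights).
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.1, -0.2])
W2 = np.array([[1.0, 1.0], [-1.0, 0.5]]); b2 = np.array([0.0, 0.0])

# Input region: L-infinity ball of radius eps around x0.
x0 = np.array([0.5, 0.5]); eps = 0.1
lo, hi = x0 - eps, x0 + eps

lo, hi = affine_bounds(W1, b1, lo, hi)
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(W2, b2, lo, hi)

# Property: f(x)[0] > f(x)[1] on the whole region. A sufficient (but
# not necessary) condition is that the lower bound on output 0 beats
# the upper bound on output 1.
if lo[0] > hi[1]:
    print("Verified: f(x)[0] > f(x)[1] everywhere on the region.")
else:
    print("Unknown: the bounds are too loose to decide.")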

Thanks to Rudy Bunel for his help with the slides and references.


SLIDES

Part 0: Preliminaries   [PPT]   [PDF]

Part 1: Neural Networks   [PPT]   [PDF]

Part 2: Formulation   [PPT]   [PDF]

Part 3: Unsound Methods   [PPT]   [PDF]

Part 4: Incomplete Methods   [PPT]   [PDF]

Part 5: Complete Methods   [PPT]   [PDF]

REFERENCES FOR UNSOUND METHODS

O. Bastani, Y. Ioannou, L. Lampropoulos, D. Vytiniotis, A. Nori and A. Criminisi. Measuring Neural Net Robustness with Constraints.

N. Carlini and D. Wagner. Towards Evaluating the Robustness of Neural Networks.

X. Huang, M. Kwiatkowska, S. Wang and M. Wu. Safety Verification of Deep Neural Networks.

W. Ruan, X. Huang and M. Kwiatkowska. Reachability Analysis of Deep Neural Networks with Provable Guarantees.

S. Webb, T. Rainforth, Y. W. Teh and M. Pawan Kumar. A Statistical Approach to Assessing Neural Network Robustness.

REFERENCES FOR INCOMPLETE METHODS

W. Xiang, H.-D. Tran and T. Johnson. Output Reachable Set Estimation and Verification for Multi-Layer Neural Networks.

T. Gehr, M. Mirman, D. Drachsler-Cohen, P. Tsankov, S. Chaudhuri and M. Vechev. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation.

M. Mirman, T. Gehr and M. Vechev. Differentiable Abstract Interpretation for Provably Robust Neural Networks.

G. Singh, T. Gehr, M. Mirman, M. Puschel and M. Vechev. Fast and Effective Robustness Certification.

E. Wong and Z. Kolter. Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope.

E. Wong, F. Schmidt, J. Metzen and Z. Kolter. Scaling Provable Adversarial Defenses.

K. Dvijotham, R. Stanforth, S. Gowal, T. Mann and P. Kohli. A Dual Approach to Scalable Verification of Deep Networks.

S. Gowal, K. Dvijotham, R. Stanforth, R. Bunel, C. Qin, J. Uesato, R. Arandjelovic, T. Mann and P. Kohli. On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models.

A. Raghunathan, J. Steinhardt and P. Liang. Certified Defenses Against Adversarial Examples.

A. Raghunathan, J. Steinhardt and P. Liang. Semidefinite Relaxations for Certifying Robustness to Adversarial Examples.

S. Wang, K. Pei, J. Whitehouse, J. Yang and S. Jana. Formal Security Analysis of Neural Networks using Symbolic Intervals.

S. Wang, K. Pei, J. Whitehouse, J. Yang and S. Jana. Efficient Formal Safety Analysis of Neural Networks.

T.-W. Weng, H. Zhang, H. Chen, Z. Song, C.-J. Hsieh, D. Boning, I. Dhillon and L. Daniel. Towards Fast Computation of Certified Robustness for ReLU Networks.

REFERENCES FOR COMPLETE METHODS

R. Ehlers. Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks.

G. Katz, C. Barrett, D. Dill, K. Julian and M. Kochenderfer. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks.

W. Xiang, H.-D. Tran and T. Johnson. Reachable Set Computation and Safety Verification for Neural Networks with ReLU Activations.

C.-H. Cheng, G. Nuhrenberg and H. Ruess. Maximum Resilience of Artificial Neural Networks.

A. Lomuscio and L. Maganti. An Approach to Reachability Analysis for Feed-Forward ReLU Neural Networks.

V. Tjeng, K. Xiao and R. Tedrake. Evaluating Robustness of Neural Networks with Mixed Integer Programming.

S. Dutta, S. Jha, S. Sankaranarayanan and A. Tiwari. Output Range Analysis for Deep Neural Networks.

R. Bunel, I. Turkaslan, P. Torr, P. Kohli and M. Pawan Kumar. A Unified View of Piecewise Linear Neural Network Verification.