- Title
Global optimization of objective functions represented by ReLU networks.
- Authors
Strong, Christopher A.; Wu, Haoze; Zeljić, Aleksandar; Julian, Kyle D.; Katz, Guy; Barrett, Clark; Kochenderfer, Mykel J.
- Abstract
Neural networks can learn complex, non-convex functions, and it is challenging to guarantee their correct behavior in safety-critical contexts. Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures. Verification algorithms address this need and provide formal guarantees about a neural network by answering "yes or no" questions. For example, they can answer whether a violation exists within certain bounds. However, individual "yes or no" questions cannot answer quantitative questions such as "what is the largest error within these bounds"; the answers to these lie in the domain of optimization. Therefore, we propose strategies to extend existing verifiers to perform optimization and find: (i) the most extreme failure in a given input region and (ii) the minimum input perturbation required to cause a failure. A naive approach using a bisection search with an off-the-shelf verifier results in many expensive and overlapping calls to the verifier. Instead, we propose an approach that tightly integrates the optimization process into the verification procedure, achieving better runtime performance than the naive approach. We evaluate our approach implemented as an extension of Marabou, a state-of-the-art neural network verifier, and compare its performance with the bisection approach and MIPVerify, an optimization-based verifier. We observe complementary performance between our extension of Marabou and MIPVerify.
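The naive baseline described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the verifier oracle `exists_output_above` is a hypothetical stand-in for a real yes/no query (e.g., to Marabou), with a made-up "true maximum" so the sketch runs on its own. Each bisection step on the output threshold costs one full verifier call, which is the overhead the paper's integrated approach avoids.

```python
def exists_output_above(threshold):
    """Stand-in verifier oracle: answers whether any input in the
    region yields a network output greater than `threshold`.
    A real implementation would dispatch a satisfiability query
    to a verifier such as Marabou."""
    TRUE_MAXIMUM = 0.7361  # hypothetical value, for illustration only
    return TRUE_MAXIMUM > threshold

def bisection_maximize(lo, hi, tol=1e-4):
    """Narrow [lo, hi] around the maximum network output using
    repeated yes/no verifier queries, one per bisection step."""
    calls = 0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        calls += 1
        if exists_output_above(mid):
            lo = mid  # a witness exists above mid: maximum is above mid
        else:
            hi = mid  # query unsatisfiable: maximum is at most mid
    return lo, hi, calls

lo, hi, calls = bisection_maximize(0.0, 1.0)
print(f"maximum in [{lo:.5f}, {hi:.5f}] after {calls} verifier calls")
```

Shrinking a unit interval to width 1e-4 takes 14 halvings, i.e., 14 independent verifier calls whose search work largely overlaps, which motivates the tighter integration the paper proposes.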
- Subjects
GLOBAL optimization
- Publication
Machine Learning, 2023, Vol. 112, Issue 10, p. 3685
- ISSN
0885-6125
- Publication type
Article
- DOI
10.1007/s10994-021-06050-2