Streaming weak submodularity: Interpreting neural networks on the fly

Ethan R. Elenberg, Alexandros G. Dimakis, Moran Feldman, Amin Karbasi

Research output: Contribution to journal › Conference article › peer-review


In many machine learning applications, it is important to explain the predictions of a black-box classifier. For example, why does a deep neural network assign an image to a particular class? We cast interpretability of black-box classifiers as a combinatorial maximization problem and propose an efficient streaming algorithm to solve it subject to cardinality constraints. By extending ideas from Badanidiyuru et al. [2014], we provide a constant factor approximation guarantee for our algorithm in the case of random stream order and a weakly submodular objective function. This is the first such theoretical guarantee for this general class of functions, and we also show that no such algorithm exists for a worst case stream order. Our algorithm obtains similar explanations of Inception V3 predictions 10 times faster than the state-of-the-art LIME framework of Ribeiro et al. [2016].
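The abstract builds on the threshold-based streaming algorithm of Badanidiyuru et al. [2014] (SIEVE-STREAMING). As a rough illustration of that underlying idea, here is a minimal sketch for a monotone objective under a cardinality constraint. This is not the authors' algorithm: it uses a two-pass simplification (the best singleton value is computed up front rather than lazily), and the function names and toy objective are purely illustrative.

```python
import math

def sieve_streaming(stream, f, k, eps=0.1):
    """Sketch of SIEVE-STREAMING-style selection (illustrative only).

    stream: iterable of elements; f: monotone set function
    (frozenset -> float); k: cardinality budget.
    """
    elements = list(stream)
    # Two-pass simplification: best singleton value m bounds OPT in [m, 2*k*m].
    m = max(f(frozenset([e])) for e in elements)
    # Geometric grid of guesses (1 + eps)^i for OPT.
    lo_i = math.ceil(math.log(m, 1 + eps))
    hi_i = math.floor(math.log(2 * k * m, 1 + eps))
    sieves = {i: set() for i in range(lo_i, hi_i + 1)}
    for e in elements:
        for i, S in sieves.items():
            if len(S) >= k:
                continue
            v = (1 + eps) ** i  # current guess of OPT for this sieve
            gain = f(frozenset(S | {e})) - f(frozenset(S))
            # Keep e if its marginal gain clears the per-slot threshold.
            if gain >= (v / 2 - f(frozenset(S))) / (k - len(S)):
                S.add(e)
    # Return the best candidate set across all guesses.
    return max(sieves.values(), key=lambda S: f(frozenset(S)))
```

For example, with a set-cover objective `f(S) = |union of covered items|`, each arriving element is kept by a sieve only when its marginal coverage is large relative to that sieve's guess of the optimum. The paper's contribution, per the abstract, is extending guarantees of this flavor to weakly submodular objectives under random stream order.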

Original language: English
Pages (from-to): 4045-4055
Number of pages: 11
Journal: Advances in Neural Information Processing Systems
State: Published - 2017
Event: 31st Annual Conference on Neural Information Processing Systems, NIPS 2017 - Long Beach, United States
Duration: 4 Dec 2017 - 9 Dec 2017

Bibliographical note

Funding Information:
This research has been supported by NSF Grants CCF 1344364, 1407278, 1422549, 1618689, ARO YIP W911NF-14-1-0258, ISF Grant 1357/16, Google Faculty Research Award, and DARPA Young Faculty Award (D16AP00046).

Publisher Copyright:
© 2017 Neural information processing systems foundation. All rights reserved.
