
Re: None

Wednesday, 05/01/2019 2:29:31 AM

Post# of 10460
Why should you trust my interpretation? Understanding uncertainty in LIME predictions

Hui Fen (Sarah) Tan, Kuangyan Song, Madeleine Udell, Yiming Sun, Yujia Zhang
(Submitted on 29 Apr 2019)

Methods for interpreting machine-learning black-box models increase the transparency of their outcomes and in turn generate insight into the reliability and fairness of the algorithms. However, the interpretations themselves can contain significant uncertainty, which undermines trust in the outcomes and raises concerns about the model's reliability. Focusing on the method "Local Interpretable Model-agnostic Explanations" (LIME), we demonstrate the presence of two sources of uncertainty, namely the randomness in its sampling procedure and the variation of interpretation quality across different input data points. Such uncertainty is present even in models with high training and test accuracy. We apply LIME to synthetic data and two public data sets, text classification in 20 Newsgroups and recidivism risk-scoring in COMPAS, to support our argument.

https://arxiv.org/abs/1904.12991
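The first source of uncertainty the abstract names, randomness in LIME's sampling procedure, is easy to see in a toy sketch. The snippet below is a minimal, self-contained LIME-style local surrogate (not the lime library's actual implementation): it perturbs the input, queries a hypothetical black-box model, weights samples by proximity, and fits a weighted linear model. The black-box function, kernel width, and sample count are all illustrative assumptions; running it with two different random seeds yields two different sets of local feature weights, which is exactly the sampling uncertainty the paper studies.

```python
import numpy as np

def black_box(X):
    # Hypothetical nonlinear "black box": we only assume query access.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def lime_explain(x, predict, n_samples=1000, width=0.5, seed=0):
    """Minimal LIME-style sketch: perturb around x, weight samples by
    proximity, fit a weighted linear surrogate, return its coefficients."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=width, size=(n_samples, x.size))  # perturbations
    y = predict(Z)                                             # black-box queries
    d2 = ((Z - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / width ** 2)                               # proximity kernel
    sw = np.sqrt(w)
    A = np.hstack([Z - x, np.ones((n_samples, 1))])            # linear design
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # local feature weights (intercept dropped)

x0 = np.array([1.0, 1.0])
run1 = lime_explain(x0, black_box, seed=1)
run2 = lime_explain(x0, black_box, seed=2)
# Both runs approximate the local gradient, but the coefficients differ
# between seeds: the explanation itself carries sampling uncertainty.
print(run1, run2)
```

The same effect shows up with the real lime package: calling explain_instance twice on the same data point generally returns slightly different feature weights unless the random state is pinned.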


Very interesting reading!
