Artificial Intelligence and Quantum Computing for Advanced Wireless Networks. Savo G. Glisic


23. … in Proc. ICML Workshop Hum. Interpretability Mach. Learn., 2016, pp. 96–100.

24. Krening, S., Harrison, B., Feigh, K.M. et al. (2016). Learning from explanations using sentiment and advice in RL. IEEE Trans. Cogn. Develop. Syst. 9 (1): 44–55.

25. A. Mahendran and A. Vedaldi, “Understanding deep image representations by inverting them,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2015, pp. 5188–5196.

26. T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, “Distributed representations of words and phrases and their compositionality,” in Proc. Adv. Neural Inf. Process. Syst. (NIPS), 2013, pp. 3111–3119.

27. G. Ras, M. van Gerven, and P. Haselager. (2018). “Explanation methods in deep learning: Users, values, concerns and challenges.” [Online]. Available: https://arxiv.org/abs/1803.07517

28. A. Santoro, D. Raposo, D. G. T. Barrett, et al. (2017). “A simple neural network module for relational reasoning.” [Online]. Available: https://arxiv.org/abs/1706.01427

29. R. B. Palm, U. Paquet, and O. Winther. (2017). “Recurrent relational networks for complex relational reasoning.” [Online]. Available: https://arxiv.org/abs/1711.08028

30. Y. Dong, H. Su, J. Zhu, and B. Zhang, “Improving interpretability of deep neural networks with semantic information,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017, pp. 4306–4314.

31. C. Louizos, U. Shalit, J. M. Mooij, D. Sontag, R. Zemel, and M. Welling, “Causal effect inference with deep latent-variable models,” in Proc. Adv. Neural Inf. Process. Syst. (NIPS), 2017, pp. 6446–6456.

32. O. Goudet, D. Kalainathan, P. Caillou, et al. (2017). “Learning functional causal models with generative neural networks.” [Online]. Available: https://arxiv.org/abs/1709.05321

33. C. Yang, A. Rangarajan, and S. Ranka. (2018). “Global model interpretation via recursive partitioning.” [Online]. Available: https://arxiv.org/abs/1802.04253

34. M. A. Valenzuela-Escárcega, A. Nagesh, and M. Surdeanu. (2018). “Lightly-supervised representation learning with global interpretability.” [Online]. Available: https://arxiv.org/abs/1805.11545

35. A. Nguyen, A. Dosovitskiy, J. Yosinski, T. Brox, and J. Clune, “Synthesizing the preferred inputs for neurons in neural networks via deep generator networks,” in Proc. Adv. Neural Inf. Process. Syst. (NIPS), 2016, pp. 3387–3395.

36. D. Erhan, A. Courville, and Y. Bengio, “Understanding representations learned in deep architectures,” Dépt. d'Informatique et Recherche Opérationnelle, Univ. Montreal, Montreal, QC, Canada, Tech. Rep. 1355, 2010.

37. M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why should I trust you?’ Explaining the predictions of any classifier,” in Proc. 22nd ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, 2016, pp. 1135–1144.

38. M. T. Ribeiro, S. Singh, and C. Guestrin, “Anchors: High-precision model-agnostic explanations,” in Proc. AAAI Conf. Artif. Intell., 2018, pp. 1–9.

39. J. Lei, M. G'Sell, A. Rinaldo, R. J. Tibshirani, and L. Wasserman, “Distribution-free predictive inference for regression,” J. Amer. Stat. Assoc., to be published. [Online]. Available: http://www.stat.cmu.edu/~ryantibs/papers/conformal.pdf

40. Baehrens, D., Schroeter, T., Harmeling, S. et al. (2010). How to explain individual classification decisions. J. Mach. Learn. Res. 11 (6): 1803–1831.

41. K. Simonyan, A. Vedaldi, and A. Zisserman. (2013). “Deep inside convolutional networks: Visualising image classification models and saliency maps.” [Online]. Available: https://arxiv.org/abs/1312.6034

42. M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in Proc. Eur. Conf. Comput. Vis. Zurich, Switzerland: Springer, 2014, pp. 818–833.

43. B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, “Learning deep features for discriminative localization,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2016, pp. 2921–2929.

44. M. Sundararajan, A. Taly, and Q. Yan. (2017). “Axiomatic attribution for deep networks.” [Online]. Available: https://arxiv.org/abs/1703.01365

45. D. Smilkov, N. Thorat, B. Kim, F. Viégas, and M. Wattenberg. (2017). “SmoothGrad: Removing noise by adding noise.” [Online]. Available: https://arxiv.org/abs/1706.03825

46. Robnik-Šikonja, M. and Kononenko, I. (2008). Explaining classifications for individual instances. IEEE Trans. Knowl. Data Eng. 20 (5): 589–600.

47. Montavon, G., Lapuschkin, S., Binder, A. et al. (2017). Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recog. 65: 211–222.

48. S. Bach, A. Binder, K.-R. Müller, and W. Samek, “Controlling explanatory heatmap resolution and semantics via decomposition depth,” in Proc. IEEE Int. Conf. Image Process. (ICIP), Sep. 2016, pp. 2271–2275.

49. R. Fong and A. Vedaldi. (2017). “Interpretable explanations of black boxes by meaningful perturbation.” [Online]. Available: https://arxiv.org/abs/1704.03296

50. P. Dabkowski and Y. Gal, “Real time image saliency for black box classifiers,” in Proc. Adv. Neural Inf. Process. Syst., 2017, pp. 6970–6979.

51. P.-J. Kindermans, K. T. Schütt, M. Alber, et al., “Learning how to explain neural networks: PatternNet and PatternAttribution,” in Proc. Int. Conf. Learn. Represent., 2018, pp. 1–16. Accessed: Jun. 6, 2018. [Online]. Available: https://openreview.net/forum?id=Hkn7CBaTW

52. A. Shrikumar, P. Greenside, A. Shcherbina, and A. Kundaje. (2016). “Not just a black box: Interpretable deep learning by propagating activation differences.” [Online]. Available: http://arxiv.org/abs/1605.01713

53. A. Ross, M. C. Hughes, and F. Doshi-Velez, “Right for the right reasons: Training differentiable models by constraining their explanations,” in Proc. Int. Joint Conf. Artif. Intell., 2017, pp. 2662–2670.

54. S. M. Lundberg and S. I. Lee, “A unified approach to interpreting model predictions,” in Proc. Adv. Neural Inf. Process. Syst., 2017, pp. 4768–4777.

55. R. Guidotti, A. Monreale, S. Ruggieri, D. Pedreschi, F. Turini, and F. Giannotti. (2018). “Local rule-based explanations of black box decision systems.” [Online]. Available: https://arxiv.org/abs/1805.10820

56. D. Linsley, D. Scheibler, S. Eberhardt, and T. Serre. (2018). “Global-and-local attention networks for visual recognition.” [Online]. Available: https://arxiv.org/abs/1805.08819

57. S. Seo, J. Huang, H. Yang, and Y. Liu, “Interpretable convolutional neural networks with dual local and global attention for review rating prediction,” in Proc. 11th ACM Conf. Recommender Syst. (RecSys), 2017, pp. 297–305.

58. C. Molnar. (2018). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Accessed: Jun. 6, 2018. [Online]. Available: https://christophm.github.io/interpretable-ml-book

59. O. Bastani, C. Kim, and H. Bastani. (2017).