<?xml version="1.0" encoding="UTF-8"?>
<record>
  <title>Visualization of Explanations of Incremental Models</title>
  <journal>Journal of Intelligent Computing</journal>
  <author>Jaka Demšar, Zoran Bosnić, Igor Kononenko</author>
  <volume>10</volume>
  <issue>4</issue>
  <year>2019</year>
  <doi>https://doi.org/10.6025/jic/2019/10/4/121-127</doi>
  <url>http://www.dline.info/jic/fulltext/v10n4/jicv10n4_1.pdf</url>
  <abstract>The temporal dimension that is ever more prevalent in data makes data stream mining (incremental learning) an important field of machine learning. In addition to accurate predictions, explanations of models and examples are a crucial component, as they provide insight into a model's decisions and lessen its black-box nature, thus increasing the user's trust. Proper visual representation of data is also highly relevant to the user's understanding; visualization is often utilised in machine learning because it shifts the balance between perception and cognition, taking fuller advantage of the brain's abilities. In this paper we review visualisation in the incremental setting and devise an improved version of an existing visualisation of explanations of incremental models. We discuss the detection of concept drift in data streams and experiment with a novel detection method that uses the stream of the model's explanations to determine the places of change in the data domain.</abstract>
</record>
