Interpretable deep learning for healthcare

January 4, 2018, 11 AM

302-308




Since 2012, deep learning, or representation learning, has shown impressive progress in computer vision, speech recognition, and natural language processing. The power of deep learning comes from combining expressive models with large labeled datasets, which allows the machine to extract useful information from high-dimensional data, a task that was a human responsibility before the rise of deep learning. Massive amounts of data have been collected in healthcare since the introduction of electronic health records (EHR), more than human medical experts can process. It is therefore expected that deep learning can play a significant role in healthcare, as it did in vision and language. However, computational healthcare requires predictive models to be both accurate and interpretable. My talk will introduce how to use recurrent neural networks (RNN), one of the building blocks of deep learning, to process longitudinal EHR data and predict a future event. Specifically, I will focus on predicting heart failure onset given a patient's 18-month record. Building on this, I will address the interpretability issue of deep learning models and propose a method that makes predictions that are both accurate and interpretable.
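As a rough illustration of the setup described above, the sketch below runs a GRU over a sequence of multi-hot "visit" vectors and attaches a simple attention layer whose per-visit weights can be read as importance scores. All dimensions, weights, and the attention mechanism are illustrative assumptions for this sketch, not the speaker's actual model or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: each visit is a multi-hot vector over 20 medical codes,
# and the recurrent hidden state has 8 units.
n_codes, hidden = 20, 8

# Randomly initialised GRU parameters (illustration only, not trained weights).
Wz, Uz, bz = rng.normal(0, 0.1, (hidden, n_codes)), rng.normal(0, 0.1, (hidden, hidden)), np.zeros(hidden)
Wr, Ur, br = rng.normal(0, 0.1, (hidden, n_codes)), rng.normal(0, 0.1, (hidden, hidden)), np.zeros(hidden)
Wh, Uh, bh = rng.normal(0, 0.1, (hidden, n_codes)), rng.normal(0, 0.1, (hidden, hidden)), np.zeros(hidden)
w_attn = rng.normal(0, 0.1, hidden)   # scores each hidden state for attention
w_out = rng.normal(0, 0.1, hidden)    # maps the context vector to a logit

def gru_step(x, h):
    z = sigmoid(Wz @ x + Uz @ h + bz)             # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)             # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh) # candidate state
    return (1 - z) * h + z * h_tilde

def predict(visits):
    """visits: (T, n_codes) multi-hot matrix, one row per clinical visit.
    Returns a heart-failure probability and per-visit attention weights."""
    h = np.zeros(hidden)
    states = []
    for x in visits:               # process the record visit by visit
        h = gru_step(x, h)
        states.append(h)
    H = np.stack(states)           # (T, hidden)
    scores = H @ w_attn
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()           # softmax attention over visits
    context = alpha @ H            # attention-weighted sum of hidden states
    return sigmoid(w_out @ context), alpha

# A toy record of 5 visits, each with a few randomly observed codes.
visits = (rng.random((5, n_codes)) < 0.15).astype(float)
prob, alpha = predict(visits)
```

Because `alpha` sums to one over the visits, each weight can be inspected as a crude indication of how much a given visit contributed to the prediction, which is one common way attention is used to make RNN predictions more interpretable.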