The talk starts at 12:15.
Please note that due to COVID-19, participants can watch the streamed talk on Teams via the link below.
Speaker: Lars Henry Berge Olsen
Location: Teams: Click here to join the meeting
Title: Explaining predictive models with mixed features using Shapley values and variational autoencoders
Abstract:
Shapley values originated in cooperative game theory but are today extensively used as a model-agnostic explanation framework for complex predictive machine learning models. Shapley values have desirable theoretical properties and a sound mathematical foundation, which make them well suited to explaining black-box models. Precise Shapley values for dependent data rely on accurate modeling of the dependencies between all feature combinations, the number of which grows exponentially with the number of features. We propose to use a variational autoencoder with arbitrary conditioning (VAEAC) to model all feature dependencies simultaneously. We demonstrate through comprehensive simulation studies that VAEAC outperforms the state-of-the-art methods across a wide range of settings for both dependent continuous and mixed data. Finally, we apply VAEAC to a real data set from the UCI Machine Learning Repository.
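To make the exponential-growth point concrete, below is a minimal sketch (my own illustration, not the speaker's code) of exact Shapley values computed by enumerating all 2^M coalitions. The conditional expectation v(S) is approximated here by replacing out-of-coalition features with values drawn independently from background data; VAEAC's contribution is precisely to model the conditional distribution of those missing features instead of assuming independence. All function and variable names are hypothetical.

```python
import itertools
import math

def shapley_values(f, x, background, n_features):
    """Exact Shapley values by enumerating all 2^M coalitions.

    v(S) = E[f(X) | X_S = x_S] is estimated by replacing features
    outside S with values from the background data, which implicitly
    assumes feature independence (the simplification VAEAC avoids).
    Runtime is O(2^M * len(background)) -- the exponential cost the
    abstract refers to.
    """
    def value(S):
        # Average model output with features in S fixed to x
        # and the rest taken from each background row.
        total = 0.0
        for b in background:
            z = [x[i] if i in S else b[i] for i in range(n_features)]
            total += f(z)
        return total / len(background)

    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for r in range(len(others) + 1):
            for S in itertools.combinations(others, r):
                # Classic Shapley weight |S|! (M - |S| - 1)! / M!
                w = (math.factorial(len(S))
                     * math.factorial(n_features - len(S) - 1)
                     / math.factorial(n_features))
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# For a linear model with a zero-mean background, the Shapley value
# of feature i reduces to w_i * x_i, which the enumeration recovers.
phi = shapley_values(lambda z: 2 * z[0] + 3 * z[1],
                     [1, 2], [[0, 0], [1, 1], [-1, -1]], 2)
```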