Explainable Deep Learning for Keystroke Authentication
Abstract
In recent years, researchers have been exploring Siamese Neural Network (SNN) architectures for keystroke dynamics. One such model, TypeNet, achieves state-of-the-art performance for keystroke dynamics–based authentication. If TypeNet's classifications cannot be explained, user trust in the model, and consequently its adoption, will remain low. To address these concerns, we present a new methodology for model explainability and evaluate it on the TypeNet architecture using the Clarkson II dataset. Our method is two-part: first, it identifies the input features that are most impactful for the model; second, it determines which indices of the output embedding are most affected by each feature of each input digraph. These findings help explain how TypeNet generates its embeddings, and the methodology's flexibility allows it to be adapted to many other architectures.
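To make the two-step procedure concrete, the following is a minimal perturbation-based sketch. The toy `embed` surrogate, the feature layout, and the finite-difference sensitivity measure are illustrative assumptions, not the paper's actual implementation; TypeNet itself would take the place of the surrogate model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in (hypothetical) for an embedding model such as TypeNet:
# maps a sequence of digraph timing features (seq_len x n_features)
# to a fixed-size embedding via mean-pooling and a linear projection.
W = rng.normal(size=(5, 8))  # n_features=5, embedding_dim=8


def embed(x):
    # Mean-pool over the digraph sequence, then project (toy surrogate).
    return x.mean(axis=0) @ W


def per_index_sensitivity(x, feature_idx, eps=1e-3):
    """Perturb one timing feature across all digraphs and measure
    how much each index of the output embedding moves (step 2)."""
    x_pert = x.copy()
    x_pert[:, feature_idx] += eps
    return np.abs(embed(x_pert) - embed(x)) / eps


x = rng.normal(size=(50, 5))  # 50 digraphs, 5 timing features each
sens = np.stack([per_index_sensitivity(x, f) for f in range(5)])

# Step 1: rank features by total effect on the embedding.
print("most impactful feature:", sens.sum(axis=1).argmax())
# Step 2: for a given feature, find the embedding indices it drives most.
print("embedding indices most affected by feature 0:",
      sens[0].argsort()[::-1][:3])
```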