Grad Seminar - Empowering Security Analysts: A User-Centric Approach to Explainable AI for Insider Threat Detection
Abstract
Recent increases in leaks of personal and proprietary information, many of which originate with insiders within organizations, highlight the urgent need to address this threat. One proposed solution leverages machine learning techniques, such as recurrent neural networks, to detect abnormal behavior indicative of insider threats. Initial experiments with a recurrent neural network-based autoencoder have shown promise in identifying and preventing such threats. Ongoing research focuses on improving the interpretability and trustworthiness of the system by integrating explainability graphs and conducting user testing with security analysts, with the aim of refining the model for seamless adoption in Security Operations Centers.
Presenter
Abdul Muqtadir Abbasi, PhD candidate in Systems Design Engineering
Attend in-person or on Microsoft Teams:
https://teams.microsoft.com/l/meetup-join/19%3ameeting_Y2QzNzdlODAtYmE2MS00ZjczLTg3ZDEtZGJlNTZiZDY3Njlk%40thread.v2/0?context=%7b%22Tid%22%3a%22723a5a87-f39a-4a22-9247-3fc240c01396%22%2c%22Oid%22%3a%22d455d920-5436-4266-862f-df9c81fe2143%22%7d