Novel Techniques Using Graph Neural Networks (GNNs) for Anomaly Detection
Abstract
This paper explores two new mechanisms that leverage graph neural networks for anomaly detection. The novelty of the first approach is to combine the global attention capability of the transformer architecture, realized as a Graph Attention Network (GAT), with a Chebyshev polynomial expansion of the graph Laplacian for representation. The GAT learns attention weights over the graph features produced by the Chebyshev expansion, capturing higher-order graph structure at reduced computational cost while using attention to emphasize the features most relevant to detecting anomalies.
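The pairing of Chebyshev filtering with attention can be illustrated with a short sketch. The code below, written against PyTorch Geometric's ChebConv and GATConv layers, is a minimal illustration of this idea rather than the paper's exact architecture; the hidden size, number of attention heads, Chebyshev order K, and the linear anomaly-score head are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import ChebConv, GATConv, global_mean_pool


class ChebGATAnomalyDetector(torch.nn.Module):
    """Chebyshev-expanded graph features followed by attention-weighted scoring."""

    def __init__(self, in_dim, hidden_dim=64, heads=4, cheb_k=3):
        super().__init__()
        # Chebyshev expansion of the graph Laplacian captures higher-order
        # neighborhood structure without an explicit eigendecomposition.
        self.cheb = ChebConv(in_dim, hidden_dim, K=cheb_k)
        # The GAT layer learns attention weights over the Chebyshev features.
        self.gat = GATConv(hidden_dim, hidden_dim, heads=heads, concat=False)
        # Hypothetical per-node anomaly-score head (an assumption for illustration).
        self.score = torch.nn.Linear(hidden_dim, 1)

    def forward(self, x, edge_index, batch=None):
        h = F.relu(self.cheb(x, edge_index))
        h = F.relu(self.gat(h, edge_index))
        node_scores = self.score(h).squeeze(-1)  # per-node anomaly scores
        if batch is not None:
            # Optionally pool node scores into one score per graph.
            graph_scores = global_mean_pool(node_scores.unsqueeze(-1), batch)
            return node_scores, graph_scores.squeeze(-1)
        return node_scores
```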
The second approach uses Fisher information to identify anomalous graphs, with a ChebNet module for graph analysis. The ChebNet module enables deep learning on graphs, capturing complex patterns and relationships that support more accurate fraud detection. Fisher information improves model interpretability, while the ChebNet module exploits the spectral properties of the graph.
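The abstract does not specify how Fisher information is computed; one plausible realization, sketched below with PyTorch Geometric's ChebConv, scores each graph by the empirical Fisher information of a trained ChebNet classifier, i.e., the squared gradient norm of the predicted-class log-likelihood with respect to the model parameters, and flags graphs whose score deviates from the training distribution. The layer sizes, Chebyshev order, and the use of the model's own prediction as a surrogate label are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import ChebConv, global_mean_pool


class ChebNetGraphClassifier(torch.nn.Module):
    """Two-layer ChebNet with mean pooling for graph-level predictions."""

    def __init__(self, in_dim, hidden_dim=64, num_classes=2, cheb_k=3):
        super().__init__()
        self.conv1 = ChebConv(in_dim, hidden_dim, K=cheb_k)
        self.conv2 = ChebConv(hidden_dim, hidden_dim, K=cheb_k)
        self.head = torch.nn.Linear(hidden_dim, num_classes)

    def forward(self, x, edge_index, batch):
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        return self.head(global_mean_pool(h, batch))


def fisher_score(model, data):
    """Empirical Fisher score for one batched graph: squared norm of the
    gradient of the predicted-class log-likelihood w.r.t. the parameters.
    Graphs whose score deviates strongly from typical training values can
    be flagged as anomalous."""
    model.zero_grad()
    logits = model(data.x, data.edge_index, data.batch)
    log_probs = F.log_softmax(logits, dim=-1)
    # Use the model's most likely class as a surrogate label (no ground truth needed).
    nll = -log_probs.max(dim=-1).values.sum()
    nll.backward()
    return sum((p.grad ** 2).sum().item()
               for p in model.parameters() if p.grad is not None)
```

As a usage note, `data` here is assumed to be a batched graph (e.g., a torch_geometric.data.Batch produced by a DataLoader), so that `data.batch` is available for pooling.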