arXiv:2508.16915v3 (latest)
Reinforcement-Guided Hyper-Heuristic Hyperparameter Optimization for Fair and Explainable Spiking Neural Network-Based Financial Fraud Detection

23 August 2025
Sadman Mohammad Nasif
Md Abrar Jahin
M. F. Mridha
arXiv (abs) · PDF · HTML · GitHub
Main: 10 pages · Bibliography: 4 pages
Abstract

The growing adoption of home banking systems has increased cyberfraud risks, requiring detection models that are accurate, fair, and explainable. While AI methods show promise, they face challenges including computational inefficiency, limited interpretability of spiking neural networks (SNNs), and instability in reinforcement learning (RL)-based hyperparameter optimization. We propose a framework combining a Cortical Spiking Network with Population Coding (CSNPC) and a Reinforcement-Guided Hyper-Heuristic Optimizer (RHOSS). CSNPC leverages population coding for robust classification, while RHOSS applies Q-learning to adaptively select low-level heuristics under fairness and recall constraints. Integrated within the MoSSTI framework, the system incorporates explainable AI via saliency maps and spike activity profiling. Evaluated on the Bank Account Fraud (BAF) dataset, the model achieves 90.8% recall at 5% false positive rate, outperforming prior spiking and classical models while maintaining over 98% predictive equality across demographic groups. Although RHOSS introduces offline optimization cost, it is amortized at deployment. The sparse architecture of CSNPC further reduces energy consumption compared to dense ANNs. Results demonstrate that combining population-coded SNNs with RL-guided hyper-heuristics enables fair, interpretable, and high-performance fraud detection.
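The abstract describes RHOSS as a Q-learning agent that adaptively selects among low-level heuristics to tune hyperparameters under recall and fairness constraints. A minimal sketch of that idea is below; the heuristic names, the toy objective, the fairness placeholder, and the reward shaping are all illustrative assumptions, not the paper's actual implementation.

```python
import random

# Hedged sketch of RL-guided hyper-heuristic hyperparameter search, loosely
# following the abstract's description of RHOSS. Heuristic names, the toy
# objective, and the constraint below are illustrative assumptions.

HEURISTICS = ["perturb_lr", "perturb_threshold", "restart_random"]

def apply_heuristic(name, params):
    """Toy low-level heuristics that mutate a hyperparameter dict."""
    p = dict(params)
    if name == "perturb_lr":
        p["lr"] *= random.choice([0.5, 2.0])
    elif name == "perturb_threshold":
        p["threshold"] += random.uniform(-0.05, 0.05)
    else:  # random restart
        p = {"lr": 10 ** random.uniform(-4, -1),
             "threshold": random.uniform(0.3, 0.7)}
    return p

def evaluate(params):
    """Stand-in objective: think 'recall at a fixed FPR budget', penalized
    when a fairness constraint (e.g. predictive equality) is violated."""
    score = 1.0 - abs(params["lr"] - 0.01) - abs(params["threshold"] - 0.5)
    fairness_ok = params["threshold"] > 0.35  # placeholder constraint
    return score if fairness_ok else score - 1.0

def rhoss_sketch(episodes=200, alpha=0.3, gamma=0.9, eps=0.2):
    # Stateless Q-table over heuristics; epsilon-greedy selection.
    q = {h: 0.0 for h in HEURISTICS}
    params = {"lr": 0.001, "threshold": 0.5}
    best, best_score = params, evaluate(params)
    for _ in range(episodes):
        h = (random.choice(HEURISTICS) if random.random() < eps
             else max(q, key=q.get))
        cand = apply_heuristic(h, params)
        reward = evaluate(cand) - evaluate(params)  # improvement as reward
        q[h] += alpha * (reward + gamma * max(q.values()) - q[h])
        if evaluate(cand) >= evaluate(params):  # greedy acceptance
            params = cand
        if evaluate(params) > best_score:
            best, best_score = params, evaluate(params)
    return best, best_score
```

Because candidates are only accepted when they do not worsen the objective, the optimization cost is paid offline, matching the abstract's note that RHOSS overhead is amortized at deployment.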
