This paper introduces a framework for systematic complexity scaling of deep neural network (DNN) based MIMO detectors. The model uses a fraction of the DNN inputs by scaling their values with weights that follow monotonically non-increasing functions. This enables weight scaling across and within the different DNN layers, achieving accuracy-vs.-complexity scalability during inference. To further improve performance, we introduce a sparsity-inducing regularization constraint in conjunction with trainable weight-scaling functions. In this way, the network learns to balance detection accuracy against complexity while also becoming more robust to changes in the activation patterns, leading to further improvements in detection accuracy and BER performance at the same inference complexity. Numerical results show that our approach is 10-fold and 100-fold less complex than classical approaches based on semi-definite relaxation and ML detection, respectively.
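As a rough illustration of the idea described above (not the authors' implementation), the sketch below assumes PyTorch and shows a fully connected layer whose inputs are gated by a trainable, monotonically non-increasing scaling profile, together with an L1 (sparsity-inducing) penalty on the scaling weights. Names such as `ScaledLinear` and `sparsity_penalty` are illustrative only.

```python
import torch
import torch.nn as nn


class ScaledLinear(nn.Module):
    """Linear layer whose inputs are gated by a non-increasing scaling profile."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Unconstrained parameters; a cumulative softplus enforces monotonicity.
        self.raw_scale = nn.Parameter(torch.zeros(in_features))

    def scaling_weights(self) -> torch.Tensor:
        # Non-negative increments accumulated from the last input to the first
        # yield a monotonically non-increasing profile, normalized to at most 1.
        increments = torch.nn.functional.softplus(self.raw_scale)
        profile = torch.flip(torch.cumsum(torch.flip(increments, [0]), 0), [0])
        return profile / (profile.max() + 1e-8)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x * self.scaling_weights())


def sparsity_penalty(model: nn.Module, lam: float = 1e-3) -> torch.Tensor:
    """L1 regularizer on the scaling profiles, encouraging inputs to be dropped."""
    penalty = torch.tensor(0.0)
    for m in model.modules():
        if isinstance(m, ScaledLinear):
            penalty = penalty + m.scaling_weights().abs().sum()
    return lam * penalty
```

In such a setup, the penalty would be added to the detection loss during training, and at inference, inputs whose scaling weight falls below a threshold could be pruned to trade accuracy for complexity.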