This paper introduces a new approach to sound source localization using
head-related transfer function (HRTF) characteristics, which enable precise
full-sphere localization from raw data. While previous research focused
primarily on extensive microphone arrays in the frontal plane, such
arrangements often suffer from limited accuracy and robustness when smaller
microphone arrays are used. Our model uses both the time and frequency domains
for sound source localization within a Deep Learning (DL) approach. The
performance of our proposed model surpasses the current
state-of-the-art results. Specifically, it achieves an average angular error of
0.24 degrees and an average Euclidean distance of 0.01 meters, while the known
state-of-the-art gives an average angular error of 19.07 degrees and an average
Euclidean distance of 1.08 meters. This level of accuracy is of paramount
importance for a wide range of applications, including robotics, virtual
reality, and aiding individuals with cochlear implants (CI).