
arXiv:1909.08153
CAMAL: Context-Aware Multi-layer Attention framework for Lightweight Environment Invariant Visual Place Recognition

18 September 2019
Ahmad Khaliq, Shoaib Ehsan, Michael Milford, Klaus McDonald-Maier
Abstract

In the last few years, Deep Convolutional Neural Networks (D-CNNs) have shown state-of-the-art (SOTA) performance for Visual Place Recognition (VPR), a pivotal component of long-term intelligent robotic vision, i.e., vision-aware localization and navigation systems. The strong generalization power of D-CNNs, gained by training on large-scale place datasets, allows them to learn persistent image regions that are robust for place recognition under changing conditions and camera viewpoints. However, for resource-constrained mobile robots, lightweight VPR techniques are preferred over the computation- and power-intensive D-CNN-based VPR algorithms used to determine their approximate location. This paper presents CAMAL, a computation- and energy-efficient framework that captures place-specific multi-layer convolutional attentions for environment-invariant VPR. Evaluation on challenging benchmark place recognition datasets shows that the proposed framework achieves better or comparable Area under the Precision-Recall curve (AUC-PR) at roughly 4x lower power consumption, with approximately 4x faster image retrieval than contemporary VPR methodologies.
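To make the core idea concrete, the sketch below illustrates attention-weighted pooling of convolutional feature maps from multiple layers into a single place descriptor. This is a hedged illustration of the general technique only, not the authors' CAMAL implementation: the spatial attention here is a simple channel-sum heuristic, whereas the paper's context-aware attention is learned, and the layer shapes are invented for the example.

```python
import numpy as np

def attention_pool(fmap):
    """Attention-weighted pooling of one conv feature map.

    fmap: (C, H, W) activations. The attention score at each spatial
    location is the summed channel activation (a common, simple proxy;
    CAMAL's context-aware attention is learned, not this heuristic).
    Returns a C-dimensional descriptor.
    """
    attn = fmap.sum(axis=0)                 # (H, W) spatial attention map
    attn = attn / (attn.sum() + 1e-8)       # normalize to a distribution
    return (fmap * attn).sum(axis=(1, 2))   # (C,) attention-weighted descriptor

def multi_layer_descriptor(fmaps):
    """Concatenate attention-pooled descriptors from several conv layers,
    then L2-normalize so matching reduces to a dot product."""
    d = np.concatenate([attention_pool(f) for f in fmaps])
    return d / (np.linalg.norm(d) + 1e-8)

# Toy usage with two hypothetical conv layers (shapes are illustrative).
rng = np.random.default_rng(0)
q = multi_layer_descriptor([rng.random((64, 8, 8)), rng.random((128, 4, 4))])
r = multi_layer_descriptor([rng.random((64, 8, 8)), rng.random((128, 4, 4))])
similarity = float(q @ r)  # cosine similarity; higher = more likely same place
```

In a VPR pipeline, each reference image would be encoded once this way, and a query descriptor would be matched against all references by dot product, with AUC-PR computed over the resulting similarity scores.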
