ImAiR: Airwriting Recognition framework using Image Representation of IMU Signals

4 May 2022
Ayush Tripathi, A. Mondal, Lalan Kumar, Prathosh A.P.
Abstract

The problem of airwriting recognition is focused on identifying letters written by the movement of a finger in free space. It is a type of gesture recognition in which the dictionary corresponds to the letters of a specific language. In particular, airwriting recognition using sensor data from wrist-worn devices can serve as a medium of user input for applications in Human-Computer Interaction (HCI). Recognition of in-air trajectories using such wrist-worn devices has received limited attention in the literature and forms the basis of the current work. In this paper, we propose an airwriting recognition framework that first encodes the time-series data obtained from a wearable Inertial Measurement Unit (IMU) on the wrist as images and then uses deep learning-based models to identify the written letters. The signals recorded from the 3-axis accelerometer and gyroscope in the IMU are encoded as images using techniques such as the Self-Similarity Matrix (SSM), Gramian Angular Field (GAF), and Markov Transition Field (MTF) to form two sets of 3-channel images. These are then fed to two separate classification models, and the letter prediction is made by averaging the class-conditional probabilities obtained from the two models. Several standard image-classification architectures, including variants of ResNet, DenseNet, VGGNet, AlexNet, and GoogLeNet, have been evaluated. Experiments on two publicly available datasets demonstrate the efficacy of the proposed strategy. The code for our implementation will be made available at https://github.com/ayushayt/ImAiR.
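The pipeline described in the abstract has two stages: per-axis signal-to-image encoding, and late fusion of two classifiers by averaging their class-conditional probabilities. The following Python sketch illustrates those stages using the pyts library for the GAF and MTF encodings; the SSM distance metric, the one-channel-per-axis layout, the image size, and the model interfaces are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import numpy as np
from pyts.image import GramianAngularField, MarkovTransitionField

def self_similarity_matrix(x):
    """SSM of a 1-D signal: pairwise absolute differences between samples
    (the exact distance metric used in the paper is an assumption here)."""
    x = np.asarray(x, dtype=float)
    return np.abs(x[:, None] - x[None, :])

def encode_sensor(signal_3axis, image_size=64, method="gaf"):
    """Encode a (3, T) accelerometer or gyroscope recording as a 3-channel
    image, one encoded channel per axis (assumed channel layout)."""
    if method == "gaf":
        return GramianAngularField(image_size=image_size).fit_transform(signal_3axis)
    if method == "mtf":
        return MarkovTransitionField(image_size=image_size).fit_transform(signal_3axis)
    # The SSM grows with the signal length, so subsample each axis to image_size.
    idx = np.linspace(0, signal_3axis.shape[1] - 1, image_size).astype(int)
    return np.stack([self_similarity_matrix(ch[idx]) for ch in signal_3axis])

def predict_letter(acc, gyr, model_acc, model_gyr):
    """Late fusion: average the two models' class-conditional probabilities
    and pick the most likely letter, as the abstract outlines. Each model is
    any callable mapping a (3, H, W) image to a vector of softmax scores."""
    p_acc = model_acc(encode_sensor(acc))
    p_gyr = model_gyr(encode_sensor(gyr))
    return int(np.argmax((p_acc + p_gyr) / 2.0))
```

Averaging the softmax outputs (late fusion) keeps the accelerometer and gyroscope models independent, so either can be retrained or swapped without touching the other.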
