ResearchTrend.AI

RaceVLA: VLA-based Racing Drone Navigation with Human-like Behaviour
arXiv:2503.02572 · 4 March 2025
Valerii Serpiva, Artem Lykov, Artyom Myshlyaev, Muhammad Haris Khan, Ali Alridha Abdulkarim, Oleg Sautenkov, Dzmitry Tsetserukou

Papers citing "RaceVLA: VLA-based Racing Drone Navigation with Human-like Behaviour"

10 of 10 papers shown
Vision-Language-Action Models for Robotics: A Review Towards Real-World Applications
IEEE Access, 2025
Kento Kawaharazuka, Jihoon Oh, Jun Yamada, Ingmar Posner, Yuke Zhu
LM&Ro · 08 Oct 2025
Bring the Apple, Not the Sofa: Impact of Irrelevant Context in Embodied AI Commands on VLA Models
Daria Pugacheva, Andrey Moskalenko, Denis Shepelev, Andrey Kuznetsov, V. Shakhuro, E. Tutubalina
08 Oct 2025
SINGER: An Onboard Generalist Vision-Language Navigation Policy for Drones
Maximilian Adang, JunEn Low, Ola Shorinwa, Mac Schwager
LM&Ro · 23 Sep 2025
Pure Vision Language Action (VLA) Models: A Comprehensive Survey
Dapeng Zhang, Jin Sun, Chenghui Hu, Xiaoyan Wu, Zhenlong Yuan, R. Zhou, Fei Shen, Qingguo Zhou
LM&Ro · 23 Sep 2025
GestOS: Advanced Hand Gesture Interpretation via Large Language Models to Control Any Type of Robot
Artem Lykov, Oleg Kobzarev, Dzmitry Tsetserukou
SLR · 17 Sep 2025
VLH: Vision-Language-Haptics Foundation Model
Luis Francisco Moreno Fuentes, Muhammad Haris Khan, Miguel Altamirano Cabrera, Valerii Serpiva, Dmitri Iarchuk, Yara Mahmoud, Issatay Tokmurziyev, Dzmitry Tsetserukou
VLM · 02 Aug 2025
UAV-CodeAgents: Scalable UAV Mission Planning via Multi-Agent ReAct and Vision-Language Reasoning
Oleg Sautenkov, Malaika Zafar, Muhammad Ahsan Mustafa, Faryal Batool, Jeffrin Sam, Artem Lykov, Chih-Yung Wen, Dzmitry Tsetserukou
12 May 2025
Vision-Language-Action Models: Concepts, Progress, Applications and Challenges
Ranjan Sapkota, Yang Cao, Konstantinos I. Roumeliotis, Manoj Karkee
LM&Ro · 07 May 2025
UAV-VLRR: Vision-Language Informed NMPC for Rapid Response in UAV Search and Rescue
Malaika Zafar, Muhammad Ahsan Mustafa, Oleg Sautenkov, Valerii Serpiva, Dzmitry Tsetserukou
04 Mar 2025
CognitiveDrone: A VLA Model and Evaluation Benchmark for Real-Time Cognitive Task Solving and Reasoning in UAVs
Artem Lykov, Valerii Serpiva, Muhammad Haris Khan, Oleg Sautenkov, Artyom Myshlyaev, Grik Tadevosyan, Malaika Zafar, Dzmitry Tsetserukou
03 Mar 2025