ActionBert: Leveraging User Actions for Semantic Understanding of User Interfaces

22 December 2020 · arXiv:2012.12350
Zecheng He, Srinivas Sunkara, Xiaoxue Zang, Ying Xu, Lijuan Liu, Nevan Wichers, Gabriel Schubiner, Ruby B. Lee, Jindong Chen, Blaise Agüera y Arcas

Papers citing "ActionBert: Leveraging User Actions for Semantic Understanding of User Interfaces"

12 / 12 papers shown

VideoGUI: A Benchmark for GUI Automation from Instructional Videos
Kevin Qinghong Lin, Linjie Li, Difei Gao, Qinchen Wu, Mingyi Yan, Zhengyuan Yang, Lijuan Wang, Mike Zheng Shou
14 Jun 2024

AI Assistance for UX: A Literature Review Through Human-Centered AI
Yuwen Lu, Yuewen Yang, Qinyi Zhao, Chengzhi Zhang, Toby Jia-Jun Li
08 Feb 2024

EGFE: End-to-end Grouping of Fragmented Elements in UI Designs with Multimodal Learning
Liuqing Chen, Yunnong Chen, Shuhong Xiao, Yaxuan Song, Lingyun Sun, Yankun Zhen, Tingting Zhou, Yan-fang Chang
18 Sep 2023

Video2Action: Reducing Human Interactions in Action Annotation of App Tutorial Videos
Sidong Feng, Chunyang Chen, Zhenchang Xing
07 Aug 2023

Android in the Wild: A Large-Scale Dataset for Android Device Control
Christopher Rawles, Alice Li, Daniel Rodriguez, Oriana Riva, Timothy Lillicrap
19 Jul 2023 · LM&Ro

Multimodal Web Navigation with Instruction-Finetuned Foundation Models
Hiroki Furuta, Kuang-Huei Lee, Ofir Nachum, Yutaka Matsuo, Aleksandra Faust, S. Gu, Izzeddin Gur
19 May 2023 · LM&Ro

Screen Correspondence: Mapping Interchangeable Elements between UIs
Jason Wu, Amanda Swearngin, Xiaoyi Zhang, Jeffrey Nichols, Jeffrey P. Bigham
20 Jan 2023

UGIF: UI Grounded Instruction Following
S. Venkatesh, Partha P. Talukdar, S. Narayanan
14 Nov 2022

MUG: Interactive Multimodal Grounding on User Interfaces
Tao Li, Gang Li, Jingjie Zheng, Purple Wang, Yang Li
29 Sep 2022 · LLMAG

META-GUI: Towards Multi-modal Conversational Agents on Mobile GUI
Liangtai Sun, Xingyu Chen, Lu Chen, Tianle Dai, Zichen Zhu, Kai Yu
23 May 2022 · LLMAG

Predicting and Explaining Mobile UI Tappability with Vision Modeling and Saliency Analysis
E. Schoop, Xin Zhou, Gang Li, Zhourong Chen, Björn Hartmann, Yang Li
05 Apr 2022 · HAI, FAtt

VUT: Versatile UI Transformer for Multi-Modal Multi-Task User Interface Modeling
Yang Li, Gang Li, Xin Zhou, Mostafa Dehghani, A. Gritsenko
10 Dec 2021 · MLLM