MobileAgentBench: An Efficient and User-Friendly Benchmark for Mobile LLM Agents

Luyuan Wang1,
Yongyu Deng2,
Yiwei Zha3,
Guodong Mao3,
Qinmin Wang1,
Tianchen Min3,
Wei Chen1,
Shoufa Chen4
1Carnegie Mellon University, 2University of Michigan, 3Northeastern University,
4The University of Hong Kong
Architecture Overview

Figure: Overview of the MobileAgentBench architecture.

Abstract

Large language model (LLM)-based mobile agents are increasingly popular because they can interact directly with mobile phone Graphical User Interfaces (GUIs) and have the potential to autonomously manage everyday tasks. Despite their promising prospects in both academic and industrial settings, little work has benchmarked the performance of existing mobile agents, largely because the state space of mobile apps is practically inexhaustible and there is no clear definition of which action sequences count as feasible. To address this challenge, we propose MobileAgentBench, an efficient and user-friendly benchmark designed to relieve the burden of extensive manual testing. We first define 100 tasks across 10 open-source apps, categorized into multiple difficulty levels. We then evaluate several existing mobile agents, including AppAgent and MobileAgent, to compare their performance thoroughly and systematically. All materials are available on our project webpage, contributing to the advancement of both academic and industrial fields.
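
To make the setup described above concrete, the following is a minimal, hypothetical sketch of how a benchmark task and its evaluation loop could be expressed in Python. The names used here (Task, evaluate, get_app_state, agent.step) are illustrative assumptions for exposition, not the actual MobileAgentBench API.

# Hypothetical sketch of a benchmark task and evaluation loop.
# All names and structure are illustrative assumptions, not the
# MobileAgentBench implementation.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    app: str                             # open-source app under test, e.g. "Calendar"
    description: str                     # natural-language instruction given to the agent
    difficulty: int                      # difficulty level, e.g. 1 (easy) to 3 (hard)
    max_steps: int                       # budget of GUI actions before the run fails
    is_success: Callable[[dict], bool]   # predicate over the observed app state

def evaluate(agent, task: Task, get_app_state) -> bool:
    """Run one agent on one task and report success or failure."""
    for _ in range(task.max_steps):
        state = get_app_state()              # e.g. current UI tree or screenshot metadata
        if task.is_success(state):           # success is judged on the app state reached,
            return True                      # not on matching a fixed action sequence
        agent.step(task.description, state)  # agent chooses and executes the next GUI action
    return task.is_success(get_app_state())

One natural way to sidestep the vagueness of "feasible action sequences" is to judge success from the application state that is actually reached, as the sketch above assumes: any sequence of actions that arrives at the goal state counts as a success, regardless of the particular path taken.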