Hi there! I am Yaxin Luo.
About Me

Hello! I am a first-year Machine Learning PhD student at MBZUAI, advised by Prof. Zhiqiang Shen. I also work closely with my friend Xiaofu Chen. My research vision centers on advancing Native Multimodal Foundation Models that unify understanding, generation, reasoning, planning, and action across diverse modalities. In the future, such models could extend to both digital and physical worlds. I am also interested in bridging the gap between high-performance unified intelligence and computational efficiency.
Previously, I earned my Bachelor's degree from the Technical University of Denmark, where I was fortunate to be supervised by Prof. Dim P. Papadopoulos. During my Bachelor's studies, I was also lucky to collaborate with Dr. Gen Luo and Prof. Rongrong Ji on efficient deep learning research.
More about my earlier journey...
I spent an intense and rewarding year at the University of Edinburgh studying pure mathematics and physics, an experience that sparked my passion for science and technology and deepened my curiosity about the unknown; at the time I wanted to explore String Theory, and that year ultimately shaped who I am today. Before Edinburgh, I was enrolled in a Bio-Medicine program at the University of Queensland and preparing for the UCAT to apply to the university's medical school, but I did not get in: I was busy managing a high-street multi-brand boutique in Brisbane's Southbank near the casino, and was far more focused on business than on study and research. The Edinburgh year changed my priorities and set me on a research path, thanks to the advice, encouragement, and support of my academic personal tutor, Prof. Ana Rita Pires. All of those past experiences have made me who I am today.
My research interests focus on:
- Unified Multimodal Foundation Models: Developing native multimodal foundation models that perform unified understanding, generation, reasoning, planning, and action across modalities. I aim to construct a universal interface where diverse modalities converge, enabling models to perceive complex real-world dynamics and generate coherent, high-fidelity multimodal content.
- Efficient Foundation Models: Tackling the efficiency challenges in scaling unified models. I explore novel architectures and mechanisms to maximize performance-per-compute.
Recently, I have been focusing on post-training of Unified Multimodal Foundation Models and on speeding up flow-matching models.
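For concreteness, the flow-matching part refers to the standard conditional flow-matching recipe: train a velocity field $v_\theta$ to transport noise to data along a simple path, so "speeding up" means reaching comparable sample quality in fewer ODE integration steps. A minimal sketch of the usual objective with a linear path (generic notation, not specific to any of my papers):

$$
x_t = (1-t)\,x_0 + t\,x_1,\qquad
\mathcal{L}(\theta)=\mathbb{E}_{\,t\sim\mathcal{U}[0,1],\;x_0\sim\mathcal{N}(0,I),\;x_1\sim p_{\mathrm{data}}}\big[\,\lVert v_\theta(x_t,t)-(x_1-x_0)\rVert^2\,\big]
$$

Sampling then integrates $\mathrm{d}x/\mathrm{d}t = v_\theta(x,t)$ from $t=0$ to $t=1$, and fewer integration steps means faster generation.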
Experience
Research Intern, Meituan LongCat Team
Working on Unified Multimodal Foundation Model pretraining toward a unified embedding/latent space.
- Training a unified model with strong agentic capability, alongside generation and editing of long-horizon interactive videos and images.
- Focusing on tokenization design and large-scale pretraining strategies for cross-modal unification.
Research Assistant, MBZUAI
Advised by Prof. Zhiqiang Shen at the VILA Lab.
- Investigated language-pretraining-induced bias as a strong foundation for general vision tasks, showing that LLM priors transfer to pure-vision learning (published in TMLR 2026).
- Explored reasoning and agentic behaviors in multimodal large language models (MLLMs), leading the OpenCaptchaWorld benchmark (NeurIPS 2025).
News
[2026-02-10] Next-Gen CAPTCHAs is now available on arXiv! A defense framework that leverages cognitive gaps against MLLM-based GUI agents.
[2025-09-18] OpenCaptchaWorld has been accepted at NeurIPS 2025.
Selected Publications
(* indicates equal contribution)
For a full and up-to-date publication list, please refer to my Google Scholar page.

Next-Gen CAPTCHAs: Leveraging the Cognitive Gap for Scalable and Diverse GUI-Agent Defense
arXiv 2026
Jiacheng Liu *, Yaxin Luo *, Jiacheng Cui, Xinyi Shang, Xiaohan Zhao, Zhiqiang Shen
Paper | Code | Demo | Project

OpenCaptchaWorld: A Comprehensive Web-based Platform for Testing and Benchmarking Multimodal LLM Agents
NeurIPS 2025
Yaxin Luo *, Zhaoyi Li *, Jiacheng Liu, Jiacheng Cui, Xiaohan Zhao, Zhiqiang Shen
Paper | Code | Demo

APL: Anchor-Based Prompt Learning for One-Stage Weakly Supervised Referring Expression Comprehension
ECCV 2024
Yaxin Luo, Jiayi Ji, Xiaofu Chen, Yuxin Zhang, Tianhe Ren, Gen Luo
Paper | Code

γ-MoD: Exploring Mixture-of-Depth Adaptation for Multimodal Large Language Models
ICLR 2025
Yaxin Luo, Gen Luo, Jiayi Ji, Yiyi Zhou, Xiaoshuai Sun, Zhiqiang Shen, Rongrong Ji
Paper | Code