Dongyang Li (李东洋)

Hi! 👋😋 I am currently a 3rd-year M.S. student in NCClab@SUSTech, majoring in Electronic Engineering at Southern University of Science and Technology, Shenzhen, China. My supervisor is Prof. Quanying Liu. I previously worked closely with Prof. Chao Zhang, and I am now also working closely with Dr. Shaoli Huang.

My research interests center on using Multimodal Large Language Models (MLLMs) to build Brain-Computer Interfaces (BCIs), and on advancing Embodied AI and NeuroAI.

My mission is to architect the next generation of general AI by synthesizing the principles of brain intelligence with the power of MLLMs. I aim to decode the neural foundations of perception and action and translate them into robust physical and linguistic intelligence, enabling machines that truly understand and collaborate with humanity.

🎯 Actively applying for Fall 2026 PhD programs - Feel free to reach out!

Feel free to contact me by email if you are interested in discussion or collaboration.

Email  /  Google Scholar  /  Github  /  LinkedIn  /  X  /  CV


News

- [12/2025] One journal paper is accepted by Scientific Data. [Paper] [Code]
- [12/2025] Won 1st place in Speech Detection (Standard Track) at NeurIPS 2025 LibriBrain Competition. [Code]
- [10/2025] Joined AGIBot as a Research Intern, working on humanoid robot motion generation.
- [07/2025] One paper is accepted to ACM MM 2025. [Oral] [Paper] [Code]
- [05/2025] Joined Shanghai AI Laboratory as a Research Intern, working on speech decoding.
- [12/2024] Gave an invited talk at the NeurIPS Workshop hosted by the Department of Statistics and Data Science, SUSTech. [Slide]
- [09/2024] One paper is accepted to NeurIPS 2024. [Poster] [Paper] [Code]
- [08/2024] Gave an oral presentation at the HBAI workshop at IJCAI 2024. [Slide]
- [06/2024] One paper is accepted to the Human Brain and Artificial Intelligence (HBAI) workshop at IJCAI 2024. [HBAI]

Selected Publications

BrainFLORA: Uncovering Brain Concept Representation via Multimodal Neural Embeddings
Dongyang Li*, Haoyang Qin*, Mingyang Wu, Chen Wei, Quanying Liu
ACM MM, 2025, Oral
Project page / arXiv / Github

We present BrainFLORA, the first framework that aligns EEG, MEG, and fMRI into a shared neural embedding space with a multimodal universal projector, setting a new bar for cross-subject, multi-task visual decoding while uncovering the brain's hidden map between concepts and real-world objects.

Visual Decoding and Reconstruction via EEG Embeddings with Guided Diffusion
Dongyang Li*, Chen Wei*, Shiying Li, Jiachen Zou, Quanying Liu
NeurIPS, 2024, Poster
Project page / arXiv / Github

We introduce the first zero-shot EEG-to-image reconstruction pipeline: with our brain encoder and two-stage guided diffusion, we directly reconstruct what a person is seeing from inexpensive, millisecond-resolution EEG signals, setting new state-of-the-art results in classification, retrieval, and generation and making everyday EEG-based visual decoding practical.

Education

Southern University of Science and Technology

Sep 2023 - Jun 2026 (expected)

M.Sc. in Electronic Science and Technology

Zhengzhou University

Sep 2019 - Jun 2023

B.Eng. in Computer Science and Technology

Experience

AGIBot

Oct 2025 - Present

Research Intern
Reliable multimodal humanoid locomotion generation for robotics.
Supervisor: Dr. Shaoli Huang

Shanghai AI Laboratory

May 2025 - Oct 2025

Research Intern
Non-invasive brain-computer interfaces based on multimodal models.
Supervisor: Prof. Chao Zhang

Academic Services

Conference Reviewer for ICLR, ICML, NeurIPS, ACL, KDD, AAAI.


The webpage template is borrowed from this.