![liqing.jpg](/assets/img/liqing.jpg?76c8416eca455f7ffd8e21160ff4b91f)
Qing Li 李庆
Email: dylan.liqing[at]gmail[dot]com
I am a research scientist and team lead at the Beijing Institute for General Artificial Intelligence (BIGAI), China. I received my Ph.D. in 2022 from the Department of Statistics at the University of California, Los Angeles (UCLA), advised by Professor Song-Chun Zhu. During my Ph.D., I interned at Google Research, Microsoft Azure AI, and Amazon Alexa. Before UCLA, I obtained my Bachelor's degree in 2015 and Master's degree in 2018 from the University of Science and Technology of China (USTC).
My research interests lie at the intersection of machine learning, computer vision, cognition, and robotics. My current research themes include:
- Multimodal Understanding: vision & language understanding, visual reasoning, 3D scene understanding, video understanding
- General Machine Learning: neural-symbolic reasoning and learning, structure learning, representation learning, generative modeling, few-shot learning
- Embodied Agents: language-grounded task planning, reinforcement learning, robotics
Our team is actively recruiting full-time research scientists, engineers, and self-motivated interns. We are also seeking prospective PhD students and long-term collaborators for TongProgram (通计划). Feel free to contact me if you are interested!
News
2024-07 | 🔥🔥🔥 Three papers are accepted by ECCV 2024! Check out these awesome works: PQ3D, a unified model for 3D vision-language understanding; SceneVerse, the first million-scale 3D vision-language dataset; and VideoAgent, an LLM agent that understands videos using a structured memory and four tools. |
---|---|
2024-06 | Call for papers: IJCLR 2024 will take place on September 20-22, 2024 in Nanjing! I will serve as an area chair on neuro-symbolic learning and reasoning. If you are willing to serve as a PC member, please contact me! |
2024-06 | INSIGHT is selected as ICML 2024 Spotlight (top 3.5%)! |
2024-05 | Check out our new work PQ3D, the first unified model capable of handling a wide range of 3D-VL tasks! The code and models will be released soon. Stay tuned! |
2024-05 | Two papers are accepted by ICML 2024: a 3D embodied generalist agent (LEO) and end-to-end neural-symbolic RL for explainable decision-making (INSIGHT). Congrats to Jiangyong Huang et al. and Lirui Luo et al.! |