I am currently a Researcher at Baidu Inc., working on the AI Search team. I received my Ph.D. degree from City University of Hong Kong in 2025, where I worked with Prof. Shiqi Wang. From Mar. 2024 to Sep. 2024, I was a visiting scholar at The University of Tokyo, under the supervision of Prof. Lei Ma. Before that, I received my B.E. degree from the School of Computer Science at Shandong University with first-class honours in 2020.
Research: I have broad interests in efficient long-context LLM inference and trustworthy machine learning. I am enthusiastic about understanding the internal workings of machine learning algorithms and designing tools to make them efficient, explainable, and robust.
Misc: I love backpacking and adventuring. In 2024, I set out on my first backpacking journey, traveling across Japan 🇯🇵 over two months – from the southern warmth of Okinawa to the northern beauty of Hokkaido, with stops in Kyushu and Kansai along the way. In 2025, I continued my travels through Egypt 🇪🇬 and Turkey 🇹🇷. I met incredible people from different countries and walks of life. They lifted my spirits in ways I never expected and helped me discover more about myself. I’m deeply grateful for these experiences.
news
| Sep 10, 2025 | I am looking for research interns to work on LLM inference acceleration. If you are interested, please contact me at liuyibing03@baidu.com. |
| Jun 01, 2024 | One paper is accepted to TIP 2024. This paper discusses the feature alignment problem in contrastive learning and presents a high-level concept contrast approach. |
| Jan 16, 2024 | One paper is accepted to ICLR 2024 with a Spotlight presentation (Top 5%). We present neuron activation coverage (NAC), which works for both OOD detection and generalization problems. |
selected papers
(*) denotes corresponding author
-
When Privacy Meets Recovery: The Overlooked Half of Surrogate-Driven Privacy Preservation for MLLM Editing
Siyuan Xu, Yibing Liu*, Peilin Chen and 3 more authors
In the 40th AAAI Conference on Artificial Intelligence, 2026
Oral Presentation [Top 1%] Privacy leakage in Multimodal Large Language Models (MLLMs) has long been an intractable problem. Although existing studies effectively obscure private information in MLLMs, they often overlook evaluating the authenticity and recovery quality of user privacy. To this end, this work uniquely focuses on the critical challenge of how to restore surrogate-driven protected data in diverse MLLM scenarios. We first bridge this research gap by contributing the SPPE (Surrogate Privacy Protected Editable) dataset, which includes a wide range of privacy categories and user instructions to simulate real MLLM applications. This dataset offers protected surrogates alongside their various MLLM-edited versions, thus enabling the direct assessment of privacy recovery quality. By formulating privacy recovery as a guided generation task conditioned on complementary multimodal signals, we further introduce a unified approach that reliably reconstructs private content while preserving the fidelity of MLLM-generated edits. Experiments on both SPPE and InstructPix2Pix further show that our approach generalizes well across diverse visual content and editing tasks, achieving a strong balance between privacy protection and MLLM usability.
-
Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization
Yibing Liu, Chris Xing Tian, Haoliang Li and 2 more authors
In the 12th International Conference on Learning Representations, 2024
Spotlight Presentation [Top 5%] The out-of-distribution (OOD) problem generally arises when neural networks encounter data that significantly deviates from the training data distribution, i.e., in-distribution (InD) data. In this paper, we study the OOD problem from a neuron activation view. We first formulate neuron activation states by considering both the neuron output and its influence on model decisions. Then, to characterize the relationship between neurons and OOD issues, we introduce the neuron activation coverage (NAC) – a simple measure of neuron behaviors under InD data. Leveraging our NAC, we show that 1) InD and OOD inputs can be largely separated based on neuron behavior, which significantly eases the OOD detection problem and outperforms 21 previous methods over three benchmarks (CIFAR-10, CIFAR-100, and ImageNet-1K); 2) a positive correlation between NAC and model generalization ability consistently holds across architectures and datasets, which enables a NAC-based criterion for evaluating model robustness. Compared to prevalent InD validation criteria, we show that NAC not only selects more robust models, but also correlates more strongly with OOD test performance.
-
Generalization Beyond Feature Alignment: Concept Activation-Guided Contrastive Learning
Yibing Liu, Chris Xing Tian, Haoliang Li and 1 more author
IEEE Transactions on Image Processing, 2024
Learning invariant representations via contrastive learning has achieved state-of-the-art performance in domain generalization (DG). Despite such success, in this paper, we find that its core learning strategy – feature alignment – can heavily hinder model generalization. Drawing insights from neuron interpretability, we characterize this problem from a neuron activation view. Specifically, by treating feature elements as neuron activation states, we show that conventional alignment methods tend to deteriorate the diversity of learned invariant features, as they indiscriminately minimize all neuron activation differences. This ignores the rich relations among neurons – many of them often identify the same visual concepts despite differing activation patterns. With this finding, we present a simple yet effective approach, Concept Contrast (CoCo), which relaxes element-wise feature alignment by contrasting high-level concepts encoded in neurons. Our CoCo performs in a plug-and-play fashion, so it can be integrated into any contrastive method in DG. We evaluate CoCo over four canonical contrastive methods, showing that CoCo promotes the diversity of feature representations and consistently improves model generalization capability. By dissecting this success through neuron coverage analysis, we further find that CoCo potentially invokes more meaningful neurons during training, thereby improving model learning.
-
Rethinking Attention-Model Explainability through Faithfulness Violation Test
Yibing Liu, Haoliang Li, Yangyang Guo and 3 more authors
In the 39th International Conference on Machine Learning, 2022
Attention mechanisms are dominating the explainability of deep models. They produce probability distributions over the input, which are widely regarded as feature-importance indicators. However, in this paper, we find one critical limitation in attention explanations: weakness in identifying the polarity of feature impact. This can be misleading – features with higher attention weights may not faithfully contribute to model predictions; instead, they can impose suppression effects. With this finding, we reflect on the explainability of current attention-based techniques, such as Attention ⊙ Gradient and LRP-based attention explanations. We first propose an actionable diagnostic methodology (henceforth the faithfulness violation test) to measure the consistency between explanation weights and impact polarity. Through extensive experiments, we then show that most tested explanation methods are unexpectedly hindered by the faithfulness violation issue, especially raw attention. Empirical analyses of the factors affecting violation issues further provide useful observations for adopting explanation methods in attention models.
teaching assistant
CS4187 Computer Vision for Interactivity, 2023-2024 & 2024-2025 Semester A
CS5187 Vision and Image, 2022-2023 Semester B
CS4187 Computer Vision for Interactivity, 2022-2023 Semester A
CS4296/CS5296 Cloud Computing, 2021-2022 Semester B
CS1102 Introduction to Computer Studies, 2021-2022 Semester A
professional services
Conference Reviewer: ICLR 2025, ICML 2024, ICLR 2024, NeurIPS 2023, ICML 2022
Journal Reviewer: IEEE TPAMI, TKDE, TCYB, TCSVT, ACM ToMM
Invited PC member for the short papers track at WWW 2024
honors
Institutional Research Tuition Scholarship at CityU, 2022 & 2024
Outstanding Graduate of Shandong University, 2020
The First Prize Scholarship at Shandong University (Top 5%), 2017-2019