Welcome to my personal website! I am Zhiyuan Liu (Chinese: 刘知远), a self-driven undergraduate student majoring in Software Engineering at Harbin Institute of Technology (HIT).
My research centers on a fundamental question: how can we make large models more efficient, more generalizable, and more accessible? As the size and complexity of modern models grow, so do their training and inference costs, creating a pressing need to rethink how we construct and use large models, from data representation to computational design. I am especially motivated by the challenge of improving large-model capabilities through methodological innovation rather than scale alone.
To this end, I have explored several directions, including inference acceleration, dataset distillation, and model compression.
🔥 News
- 2025.07: 🎉 We released the paper “The Devil Behind the Mask: An Emergent Safety Vulnerability of Diffusion LLMs”.
- 2025.06: 🤗 Started a research internship at Shanghai Artificial Intelligence Laboratory, focusing on AI Security.
- 2025.06: 🎉 We released the paper “Accelerating Diffusion Large Language Models with SlowFast Sampling”.
- 2025.05: 🎉 We released the paper “dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching”.
- 2025.02: 🎉 Our paper “Dataset Distillation with Neural Characteristic Function: A Minmax Perspective” was accepted to CVPR 2025 (Rating: 5/5/5). Thanks!
- 2025.01: 🤗 We released “Awesome Dataset Reduction”, an open-source repo collecting recent dataset reduction papers. Contributions are welcome!
- 2024.08: 🤗 Started a research internship at EPIC Lab, Shanghai Jiao Tong University, focusing on Efficient Inference Methods, Dataset Distillation, and Knowledge Distillation.
📝 Publications (* denotes equal contribution)

dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching
Zhiyuan Liu *, Yicun Yang *, Yaojie Zhang, Junjie Chen, Chang Zou, Qingyan Wei, Shaobo Wang, Linfeng Zhang.
- Proposed dLLM-Cache, an adaptive caching approach that accelerates inference in diffusion large language models (dLLMs); see the sketch below this entry.
- dLLM-Cache delivers up to 9.1x speedup over standard dLLM pipelines with no performance loss on most tasks.
- Project
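To give a flavor of feature caching in iterative dLLM decoding, here is a minimal, hypothetical PyTorch sketch: it recomputes transformer features only every few denoising steps and reuses them in between. The toy model, function names, and the fixed refresh interval are my own illustrative assumptions; dLLM-Cache itself decides adaptively which features to refresh, as described in the paper.

```python
import torch
import torch.nn as nn

# Toy stand-in for a dLLM's transformer trunk (illustrative only).
model = nn.Sequential(nn.Embedding(1000, 64), nn.Linear(64, 64))

def denoise_with_cache(tokens, num_steps=8, refresh_interval=4):
    """Iterative decoding with naive interval-based feature caching.

    A full forward pass runs only every `refresh_interval` steps;
    the steps in between reuse the cached features. dLLM-Cache
    replaces this fixed schedule with an adaptive refresh policy.
    """
    cache = None
    for step in range(num_steps):
        if cache is None or step % refresh_interval == 0:
            cache = model(tokens)  # full forward pass (expensive)
        features = cache           # cheap cached reuse in between
        # ...use `features` to update the masked tokens at this step...
    return features

print(denoise_with_cache(torch.randint(0, 1000, (1, 16))).shape)
```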

Dataset Distillation with Neural Characteristic Function: A Minmax Perspective
Shaobo Wang, Yicun Yang, Zhiyuan Liu, Chenghao Sun, Xuming Hu, Conghui He, Linfeng Zhang.
- Proposed NCFM, a dataset distillation method that reframes distribution matching through the characteristic function and casts the problem as min-max optimization; a simplified sketch follows this entry.
- NCFM achieves state-of-the-art performance while cutting GPU memory requirements to 1/300th of prior leading methods and accelerating training by 20x.
- Project
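As a rough illustration (my simplification, not the paper's implementation), the core quantity can be sketched as a discrepancy between the empirical characteristic functions of real and synthetic features at sampled frequencies. In NCFM the frequency distribution is the adversarial “max” player; the fixed `freqs` below elides that and keeps only the “min” side, where gradients shape the synthetic set.

```python
import torch

def ecf(x, freqs):
    """Empirical characteristic function of samples x (n, d) at
    frequencies freqs (m, d): E_x[exp(i <t, x>)], returned as
    real and imaginary parts."""
    proj = x @ freqs.T                 # (n, m) inner products <t, x>
    return proj.cos().mean(0), proj.sin().mean(0)

def cf_discrepancy(real, synth, freqs):
    """Squared distance between the two empirical CFs. NCFM chooses
    freqs adversarially (the max player); they are fixed here."""
    re_r, im_r = ecf(real, freqs)
    re_s, im_s = ecf(synth, freqs)
    return ((re_r - re_s) ** 2 + (im_r - im_s) ** 2).mean()

real = torch.randn(256, 32)                      # "real" features
synth = torch.randn(64, 32, requires_grad=True)  # distilled set
freqs = torch.randn(128, 32)                     # sampled frequencies
loss = cf_discrepancy(real, synth, freqs)
loss.backward()  # gradients flow to the synthetic set (min player)
```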

Accelerating Diffusion Large Language Models with SlowFast Sampling: The Three Golden Principles
Qingyan Wei, Yaojie Zhang, Zhiyuan Liu, Dongrui Liu, Linfeng Zhang.

The Devil Behind the Mask: An Emergent Safety Vulnerability of Diffusion LLMs
Zichen Wen, Jiashu Qu, Dongrui Liu, Zhiyuan Liu, Chaochao Lu, Jing Shao, Conghui He, Linfeng Zhang, et al.
🎖 Honors and Awards
- 2023.12 National Scholarship (Top 1%), Ministry of Education of China
- 2023.11 Huawei Smart Base Scholarship (Top 1%), Huawei Technologies Co., Ltd.
📖 Education
- 2022.08 - 2026.06, Harbin Institute of Technology, Bachelor of Software Engineering
💻 Internships
- 2025.06 - Present, Research Intern, Shanghai Artificial Intelligence Laboratory (Advised by Prof. Jing Shao).
- 2024.08 - Present, Research Intern, EPIC Lab, Shanghai Jiao Tong University (Advised by Prof. Linfeng Zhang).