📝 Selected Publications
You can also find my articles on my Google Scholar profile.
= denotes equal contribution; * denotes corresponding author
| TACO 2025 (Top Journal in Computer Architecture) | Shiyuan Huang, Fangxin Liu=,*, Zongwu Wang, Ning Yang, Haomin Li, Haibing Guan, and Li Jiang. MIX-PC: Enabling Efficient DNN Inference with Mixed Numeric Precision Compilation Optimization (CCF Tier A) |
| ASP-DAC 2026 (Top Conf. in Design Automation) | Zongwu Wang, Zhongyi Tang, Fangxin Liu*, Chenyang Guan, Li Jiang*, and Haibing Guan. TFLOP: Towards Energy-Efficient LLM Inference via An FPGA-Affinity Accelerator with Unified LUT-based Optimization (Acceptance Rate: 29%) |
| EMNLP 2025 (Top Conf. in NLP) | Fangxin Liu=, Zongwu Wang=, Jinhong Xia, Junping Zhao*, Shouren Zhao, Jinjin Li, Jian Liu, Li Jiang*, and Haibing Guan. FlexQuant: A Flexible and Efficient Dynamic Precision Switching Framework for LLM Quantization (Acceptance Rate: 22%) [Applied at Ant Group] |
| ICCAD 2025 (Top Conf. in Design Automation) | Yiwei Hu, Fangxin Liu=,*, Zongwu Wang, Yilong Zhao, Tao Yang, Haibing Guan, and Li Jiang. PLAIN: Leveraging High Internal Bandwidth in PIM for Accelerating Large Language Model Inference via Mixed-Precision Quantization (Acceptance Rate: 24%) |
| DAC 2025 (Top Conf. in Design Automation) | Zongwu Wang=, Peng Xu=, Fangxin Liu*, Yiwei Hu, Qingxiao Sun, Gezi Li, Cheng Li, Xuan Wang, Li Jiang, and Haibing Guan. MILLION: Mastering Long-Context LLM Inference via Outlier-Immunized KV Product Quantization (Acceptance Rate: 23%) [Applied at HUAWEI] |
| DAC 2025 (Top Conf. in Design Automation) | Ning Yang, Zongwu Wang*, Qingxiao Sun, Liqiang Lu, and Fangxin Liu. PISA: Efficient Precision-Slice Framework for LLMs with Adaptive Numerical Type (Acceptance Rate: 23%) |
| DATE 2025 (Top Conf. in Design Automation) | Zongwu Wang, Fangxin Liu, Peng Xu, Qingxiao Sun, Junping Zhao, and Li Jiang. EVASION: Efficient KV Cache Compression via Product Quantization (Acceptance Rate: 21%) |
| ASP-DAC 2025 (Top Conf. in Design Automation) | Fangxin Liu=, Zongwu Wang=, Peng Xu, Shiyuan Huang, and Li Jiang. Exploiting Differential-Based Data Encoding for Enhanced Query Efficiency (Acceptance Rate: 28%) |
| ICCD 2024 (Important Conf. in Computer Architecture) | Zongwu Wang=, Fangxin Liu=, and Li Jiang. PS4: A Low-Power SNN Accelerator with Spike Speculative Scheme (Acceptance Rate: 25%) |
| ICCD 2024 (Important Conf. in Computer Architecture) | Longyu Zhao, Zongwu Wang, Fangxin Liu*, and Li Jiang. Ninja: A Hardware-Assisted System for Accelerating Nested Address Translation (Acceptance Rate: 25%) |
| MICRO 2024 (Top Conf. in Computer Architecture) | Zongwu Wang, Fangxin Liu*, Ning Yang, Shiyuan Huang, Haomin Li, and Li Jiang. COMPASS: SRAM-Based Computing-in-Memory SNN Accelerator with Adaptive Spike Speculation (Acceptance Rate: 22%) |
| IEEE TPDS 2024 (Top Journal in Computer Architecture) | Fangxin Liu, Zongwu Wang, Wenbo Zhao, Ning Yang, Yongbiao Chen, Shiyuan Huang, Haomin Li, Tao Yang, Songwen Pei, Xiaoyao Liang, and Li Jiang. Exploiting Temporal-Unrolled Parallelism for Energy-Efficient SNN Acceleration (CCF Tier A) |
| ISLPED 2024 (Top Conf. in Low Power Design) | Zongwu Wang, Fangxin Liu*, Longyu Zhao, Shiyuan Huang, and Li Jiang. LowPASS: A Low-Power PIM-Based Accelerator with Speculative Scheme for SNNs (Acceptance Rate: 21%) |
| ISCA 2024 (Top Conf. in Computer Architecture) | Yilong Zhao, Mingyu Gao, Fangxin Liu*, Yiwei Hu, Zongwu Wang, Han Lin, Ji Li, He Xian, Hanlin Dong, Tao Yang, Naifeng Jing, Xiaoyao Liang, and Li Jiang. UM-PIM: DRAM-based PIM with Uniform & Shared Memory Space (Acceptance Rate: 18%) |
| ICCD 2022 (Important Conf. in Computer Architecture) | Fangxin Liu, Zongwu Wang, Yongbiao Chen, and Li Jiang. Randomize and Match: Exploiting Irregular Sparsity for Energy-Efficient Processing in SNNs (Acceptance Rate: 24%) |
| IEEE TCAD 2022 (Top Journal in Computer-Aided Design) | Fangxin Liu, Zongwu Wang, Yongbiao Chen, Zhezhi He, Tao Yang, Xiaoyao Liang, and Li Jiang. SoBS-X: Squeeze-Out Bit Sparsity for ReRAM-Crossbar-Based Neural Network Accelerator (CCF Tier A) |
| DATE 2022 (Top Conf. in Design Automation) | Zongwu Wang, Zhezhi He, Rui Yang, Shiquan Fan, Jie Lin, Fangxin Liu, Yueyang Jia, Chenxi Yuan, Qidong Tang, and Li Jiang. Self-Terminated Write of Multi-Level Cell ReRAM for Efficient Neuromorphic Computing (Best Paper Award) |
