Yihua Zhang


Room 3210

428 S. Shaw Ln

East Lansing, Michigan

United States of America

Yihua Zhang (张逸骅) is a third-year Ph.D. student in the OPTML Group at Michigan State University, advised by Prof. Sijia Liu. His research centers on trustworthy and scalable machine learning (ML) algorithms for large language models (LLMs) and diffusion models (DMs), with a focus on bridging theoretical foundations and real-world applications. In recognition of his contributions, Yihua was honored with the MLCommons Rising Star Award in 2024. He has gained industry experience through internships at Meta AI, Amazon AWS AI Lab, and Cisco Research. His work is driven by the need for efficient, scalable, and robust ML algorithms that address the practical challenges facing modern foundation models.

Research Keywords: Machine Unlearning, Jailbreak Attack, Adversarial Training, Fairness, Parameter-Efficient Fine-Tuning, Memory-Efficient Fine-Tuning, Mixture-of-Experts, Model Sparsity, Large Language Model, Diffusion Model, Bi-Level Optimization, Zeroth-Order Optimization.

:heavy_check_mark: Theme 1: Trustworthy Foundation Models: Robustness, Fairness, and Unlearning: Yihua explores how to enhance the trustworthiness of foundation models, focusing on robustness against adversarial attacks, fairness in decision-making, and the emerging area of machine unlearning to ensure data privacy and compliance with deletion requests.

:heavy_check_mark: Theme 2: Scalable Foundation Models: Efficient Models, Data, and Algorithms: In this theme, Yihua’s work revolves around designing models that are not only powerful but also computationally efficient. His research includes advancements in model sparsification, memory-efficient fine-tuning techniques, and optimizing data usage for large-scale models.

:heavy_check_mark: Theme 3: Optimization in Modern ML: Bi-Level and Zeroth-Order Optimization: This research line focuses on the theoretical underpinnings of scalable machine learning algorithms, addressing real-world constraints through bi-level optimization and zeroth-order optimization, as sketched below.
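
For readers unfamiliar with these tools, here is a generic textbook sketch (not the formulation of any specific paper below): bi-level optimization nests a lower-level problem inside an upper-level one, while zeroth-order optimization estimates gradients from function evaluations alone, which is useful when backpropagation is too memory-hungry or the model is a black box:

$$
\min_{\theta} \; f\big(\theta, \delta^*(\theta)\big) \quad \text{s.t.} \quad \delta^*(\theta) = \arg\min_{\delta} \; g(\theta, \delta),
\qquad
\hat{\nabla} f(\theta) = \frac{f(\theta + \mu u) - f(\theta - \mu u)}{2\mu}\, u, \quad u \sim \mathcal{N}(0, I).
$$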

Collaboration Opportunities

I am always open to collaborations with researchers, as well as undergraduate and graduate students seeking Ph.D. positions. While my primary research focuses on trustworthy and scalable ML algorithms for LLMs and DMs, I am also interested in exploring a wide range of topics beyond these areas. If you have exciting research ideas or are looking for opportunities to conduct research under professional guidance, feel free to reach out to me. Please refer to my collaboration statement for more details. You are also welcome to add me on WeChat or connect with me on LinkedIn.

News

Dec 11, 2024 :tada: Our paper Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective was accepted to AAAI 2025!
Sep 26, 2024 :tada: One first-authored paper (UnlearnCanvas) was accepted to the NeurIPS 2024 Datasets & Benchmarks Track!
Sep 25, 2024 :tada: Two papers were accepted to NeurIPS 2024!
Sep 19, 2024 :tada: One paper was accepted to EMNLP 2024! See our paper and code here!
Aug 28, 2024 :tada: I will start working as a research scientist intern at Meta AI!

First-Authored Publications

See the full publication list here.

  1. NeurIPS’24 D&B
    UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models
    Yihua Zhang, Chongyu Fan, Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jiancheng Liu, Gaoyuan Zhang, Gaowen Liu, Ramana Kompella, Xiaoming Liu, and 1 more author
    In Thirty-eighth Conference on Neural Information Processing Systems (Datasets & Benchmarks Track), 2024
  2. ICML’24
    Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark
    Yihua Zhang, Pingzhi Li, Junyuan Hong, Jiaxiang Li, Yimeng Zhang, Wenqing Zheng, Pin-Yu Chen, Jason D. Lee, Wotao Yin, Mingyi Hong, and 3 more authors
    In Proceedings of the 41st International Conference on Machine Learning (ICML), 2024
  3. IEEE SPM
    An Introduction to Bi-level Optimization: Foundations and Applications in Signal Processing and Machine Learning
    Yihua Zhang, Prashant Khanduri, Ioannis Tsaknakis, Yuguang Yao, Mingyi Hong, and Sijia Liu
    In IEEE Signal Processing Magazine, 2024
  4. NeurIPS’23
    Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning
    Yihua Zhang, Yimeng Zhang, Aochuan Chen, Jinghan Jia, Jiancheng Liu, Gaowen Liu, Mingyi Hong, Shiyu Chang, and Sijia Liu
    In Thirty-seventh Conference on Neural Information Processing Systems, 2023
  5. ICCV’23
    Robust Mixture-of-Expert Training for Convolutional Neural Networks
    Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Huan Zhang, Pin-Yu Chen, Shiyu Chang, Zhangyang Wang, and Sijia Liu
    In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023
  6. ICLR’23
    What Is Missing in IRM Training and Evaluation? Challenges and Solutions
    Yihua Zhang, Pranay Sharma, Parikshit Ram, Mingyi Hong, Kush Varshney, and Sijia Liu
    In Eleventh International Conference on Learning Representations, 2023
  7. NeurIPS’22
    Advancing Model Pruning via Bi-level Optimization
    Yihua Zhang*, Yuguang Yao*, Parikshit Ram, Pu Zhao, Tianlong Chen, Mingyi Hong, Yanzhi Wang, and Sijia Liu
    In Thirty-sixth Conference on Neural Information Processing Systems, 2022
  8. NeurIPS’22
    Fairness Reprogramming
    Guanhua Zhang*, Yihua Zhang*, Yang Zhang, Wenqi Fan, Qing Li, Sijia Liu, and Shiyu Chang
    In Thirty-sixth Conference on Neural Information Processing Systems, 2022
  9. ICML’22
    Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization
    Yihua Zhang*, Guanhua Zhang*, Prashant Khanduri, Mingyi Hong, Shiyu Chang, and Sijia Liu
    In Proceedings of the 39th International Conference on Machine Learning (ICML), 2022
  10. CVPR’22
    Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free
    Tianlong Chen*, Zhenyu Zhang*, Yihua Zhang*, Shiyu Chang, Sijia Liu, and Zhangyang Wang
    In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022