+ - J. Hong*, J. Wang, C. Zhang, Z. Li*, B. Li, and Z. Wang
"DP-OPT: Make Large Language Model Your Differentially-Private Prompt Engineer"
International Conference on Learning Representations (ICLR), 2024. (Spotlight) [Paper] [Code]
+ - A. Jaiswal*, Z. Gan, X. Du, B. Zhang, Z. Wang, and Y. Yang
"Compressing LLMs: The Truth is Rarely Pure and Never Simple”
International Conference on Learning Representations (ICLR), 2024. [Paper] [Code]
+ - Y. Jiang*, H. Tang, J. Chang, L. Song, Z. Wang, and L. Cao
"Efficient-3DiM: Learning a Generalizable Single-image Novel-view Synthesizer in One Day”
International Conference on Learning Representations (ICLR), 2024. [Paper] [Code]
+ - W. Chen*, J. Wu*, Z. Wang, and B. Hanin
"Principled Architecture-aware Scaling of Hyperparameters”
International Conference on Learning Representations (ICLR), 2024. [Paper] [Code]
+ - P. Wang*, S. Yang, S. Li, Z. Wang, and P. Li
"Polynomial Width is Sufficient for Set Representation with High-dimensional Features”
International Conference on Learning Representations (ICLR), 2024. [Paper] [Code]
+ - X. Chen*, Y. Yang, Z. Wang, and B. Mirzasoleiman
"Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality”
International Conference on Learning Representations (ICLR), 2024. [Paper] [Code]
+ - Y. You*, R. Zhou, J. Park, H. Xu, C. Tian, Z. Wang, and Y. Shen
"Latent 3D Graph Diffusion”
International Conference on Learning Representations (ICLR), 2024. [Paper] [Code]
+ - A. Isajanyan, A. Shatveryan, D. Kocharian, Z. Wang, and H. Shi
"Social Reward: Evaluating and Enhancing Generative AI through Million-User Feedback from an Online Creative Community”
International Conference on Learning Representations (ICLR), 2024. (Spotlight) [Paper] [Code]
+ - S. Yu, J. Hong*, H. Zhang, H. Wang*, Z. Wang, and J. Zhou
"Safe and Robust Watermark Injection with a Single OoD Image”
International Conference on Learning Representations (ICLR), 2024. [Paper] [Code]
+ - D. Sow, S. Lin, Z. Wang, and Y. Liang
"Doubly Robust Instance-Reweighted Adversarial Training”
International Conference on Learning Representations (ICLR), 2024. [Paper] [Code]
- A. Jaiswal*, S. Liu*, T. Chen*, and Z. Wang
"The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter”
Advances in Neural Information Processing Systems (NeurIPS), 2023. [Paper] [Code]
- Z. Zhang*, Y. Sheng, T. Zhou, T. Chen*, L. Zheng, R. Cai*, Z. Song, Y. Tian, C. Ré, C. Barrett, Z. Wang, and B. Chen
"H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models”
Advances in Neural Information Processing Systems (NeurIPS), 2023. [Paper] [Code]
- D. Hoang*, S. Kundu, S. Liu*, and Z. Wang
"Don’t Just Prune by Magnitude! Your Mask Topology is A Secret Weapon”
Advances in Neural Information Processing Systems (NeurIPS), 2023. [Paper] [Code]
diff --git a/research.html b/research.html
index fa39ac3..ab7ea2a 100644
--- a/research.html
+++ b/research.html
@@ -224,29 +224,31 @@ Theme 3: Generative AI for 2D/3D Visual Synthesis and Editing
- Theme 4: Learning to Optimize (L2O)
+ Theme 4: Machine Learning for Good (Robustness, Privacy, Fairness, & AI4Science)
- L2O is an emerging paradigm that leverages ML to automatically develop an optimization algorithm. It demonstrates many practical benefits including faster convergence and better solution quality. Over the past five years, we have spearheaded an ever-growing line of L2O works that significantly expand both rigorous theories (L2O convergence, worst-case/average-case generalization, adaptation, uncertainty quantification, and interpretability), and practical adoption (inverse problems in computational sensing/imaging, large model training, private training, protein docking, AI for finance, among others). Please refer to the L2O Primer and Open L2O toolbox that we presented for this community.
+ As ML systems (in particular, computer vision and LLMs) are influencing all facets of our daily life, it is now commonplace to see evidence of their untrustworthiness or harmful impacts in high-stakes environments. We have strived to build ML algorithms that are resilient to various perturbations, attacks, and biases, as well as to rising challenges in privacy, fairness, and ethics - as overviewed in our ML Safety Primer. We are also dedicated to collaborating closely with domain experts to advance AI4Science, particularly in biomedicine, bioinformatics, and healthcare, as well as to fostering AI for Social Good (our Good Systems project). A minimal robust-training sketch follows the works list below.
Selected Notable Works:
- - J. Yang, T. Chen*, M. Zhu*, F. He, D. Tao, Y. Liang, and Z. Wang, "Learning to Generalize Provably in Learning to Optimize”, International Conference on Artificial Intelligence and Statistics (AISTATS), 2023. [Paper] [Code]
- - (α-β) T. Chen*, X. Chen*, W. Chen*, H. Heaton, J. Liu, Z. Wang, and W. Yin, “Learning to Optimize: A Primer and A Benchmark”, Journal of Machine Learning Research (JMLR), 2022. [Paper] [Code]
- - W. Zheng*, T. Chen*, T. Hu*, and Z. Wang, “Symbolic Learning to Optimize: Towards Interpretability and Scalability”, International Conference on Learning Representations (ICLR), 2022. [Paper] [Code]
- - J. Liu, X. Chen*, Z. Wang, and W. Yin, “ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA”, International Conference on Learning Representations (ICLR), 2019. [Paper] [Code]
+ - J. Hong*, J. Wang, C. Zhang, Z. Li*, B. Li, and Z. Wang, "DP-OPT: Make Large Language Model Your Differentially-Private Prompt Engineer", International Conference on Learning Representations (ICLR), 2024. (Spotlight) [Paper] [Code]
+ - G. Holste*, E. Oikonomou, B. Mortazavi, A. Coppi, K. Faridi, E. Miller, J. Forrest, R. McNamara, L. Ohno-Machado, N. Yuan, A. Gupta, D. Ouyang, H. Krumholz, Z. Wang, and R. Khera, “Severe Aortic Stenosis Detection by Deep Learning Applied to Echocardiography”, European Heart Journal (EHJ), 2023. [Paper] [Code]
+ - T. Chen*, C. Gong, D. Diaz, X. Chen*, J. Wells, Q. Liu, Z. Wang, A. Ellington, A. Dimakis, and A. Klivans, "HotProtein: A Novel Framework for Protein Thermostability Prediction and Editing", International Conference on Learning Representations (ICLR), 2023. [Paper] [Code]
+ - H. Wang*, C. Xiao, J. Kossaifi, Z. Yu, A. Anandkumar, and Z. Wang, “AugMax: Adversarial Composition of Random Augmentations for Robust Training”, Advances in Neural Information Processing Systems (NeurIPS), 2021. [Paper] [Code]
+ - Z. Wu*, H. Wang*, Z. Wang, H. Jin, and Z. Wang, “Privacy-Preserving Deep Action Recognition: An Adversarial Learning Framework and A New Dataset”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020. [Paper] [Code]
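To ground this theme in something executable, the sketch below illustrates one generic adversarial training step (a min-max objective in the spirit of, though not identical to, robust-training works like AugMax). It assumes PyTorch; model, x, y, optimizer, and epsilon are hypothetical placeholders rather than artifacts of any paper above.

```python
# Minimal sketch of generic adversarial training (FGSM-style inner
# maximization). Illustrative only; not the method of any paper
# listed here, and model/x/y/optimizer/epsilon are assumed placeholders.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, epsilon=8 / 255):
    # Inner maximization: craft a perturbation that increases the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    (grad,) = torch.autograd.grad(loss, x_adv)
    x_adv = (x_adv + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

    # Outer minimization: update the model on the perturbed batch.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```

FGSM is used here only as the cheapest inner maximizer; multi-step attacks or learned augmentation compositions would slot into the same loop.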
- Theme 5: Machine Learning for Good (Robustness, Privacy, Fairness, & AI4Science)
+
+ Theme 5: Learning to Optimize (L2O)
- As ML systems (in particular, computer vision and LLM) are influencing all facets of our daily life, it is now commonplace to see evidence on the untrustworthiness or harmful impacts of ML systems in high-stake environments. We have strived to build ML algorithms that are resilient to various environment degradations, perturbations, adversarial attacks, and privacy threats - as overviewed in our ML Safety Primer. We are also keen on developing AI4sicnece (protein, medical image, material science), and AI for the Common Good (our Good Systems project)
+ L2O is an emerging paradigm that leverages ML to automatically develop optimization algorithms, with many practical benefits including faster convergence and better solution quality. Over the past five years, we have spearheaded an ever-growing line of L2O works that significantly expand both rigorous theory (L2O convergence, worst-case/average-case generalization, adaptation, uncertainty quantification, and interpretability) and practical adoption (inverse problems in computational sensing/imaging, large-model training, private training, protein docking, AI for finance, among others). Please refer to the L2O Primer and Open L2O toolbox that we have presented to this community. A minimal L2O sketch follows the works list below.
Selected Notable Works:
- - G. Holste*, E. Oikonomou, B. Mortazavi, A. Coppi, K. Faridi, E. Miller, J. Forrest, R. McNamara, L. Ohno-Machado, N. Yuan, A. Gupta, D. Ouyang, H. Krumholz, Z. Wang, and R. Khera, “Severe Aortic Stenosis Detection by Deep Learning Applied to Echocardiography”, European Heart Journal (EHJ), 2023. [Paper] [Code]
- - T. Chen*, C. Gong, D. Diaz, X. Chen*, J. Wells, Q. Liu, Z. Wang, A. Ellington, A. Dimakis, and A. Klivans, "HotProtein: A Novel Framework for Protein Thermostability Prediction and Editing”, International Conference on Learning Representations (ICLR), 2023. [Paper] [Code]
- - H. Wang*, C. Xiao, J. Kossaifi, Z. Yu, A. Anandkumar, and Z. Wang, “AugMax: Adversarial Composition of Random Augmentations for Robust Training”, Advances in Neural Information Processing Systems (NeurIPS), 2021. [Paper] [Code]
- - Z. Wu*, H. Wang*, Z. Wang, H. Jin, and Z. Wang, “Privacy-Preserving Deep Action Recognition: An Adversarial Learning Framework and A New Dataset”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020. [Paper] [Code]
+ - J. Yang, T. Chen*, M. Zhu*, F. He, D. Tao, Y. Liang, and Z. Wang, "Learning to Generalize Provably in Learning to Optimize", International Conference on Artificial Intelligence and Statistics (AISTATS), 2023. [Paper] [Code]
+ - (α-β) T. Chen*, X. Chen*, W. Chen*, H. Heaton, J. Liu, Z. Wang, and W. Yin, “Learning to Optimize: A Primer and A Benchmark”, Journal of Machine Learning Research (JMLR), 2022. [Paper] [Code]
+ - W. Zheng*, T. Chen*, T. Hu*, and Z. Wang, “Symbolic Learning to Optimize: Towards Interpretability and Scalability”, International Conference on Learning Representations (ICLR), 2022. [Paper] [Code]
+ - J. Liu, X. Chen*, Z. Wang, and W. Yin, “ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA”, International Conference on Learning Representations (ICLR), 2019. [Paper] [Code]
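To make the L2O paradigm concrete, here is a minimal, hedged sketch of a learned optimizer: a tiny network maps each coordinate's gradient to an update and is meta-trained by unrolling inner optimization on random least-squares tasks. This is a toy under assumed shapes and hyperparameters, not the method of any paper listed above.

```python
# Minimal L2O sketch: meta-train a tiny learned optimizer by unrolling.
# Illustrative assumptions throughout (task family, 10 unrolled steps,
# 0.1 step scale); not any listed paper's algorithm.
import torch
import torch.nn as nn

opt_net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
meta_opt = torch.optim.Adam(opt_net.parameters(), lr=1e-3)

for meta_step in range(100):
    # Sample a fresh inner task: minimize ||A w - b||^2 starting from w = 0.
    A, b = torch.randn(8, 4), torch.randn(8)
    w = torch.zeros(4, requires_grad=True)
    meta_loss = torch.zeros(())
    for _ in range(10):  # unrolled inner optimization steps
        loss = ((A @ w - b) ** 2).mean()
        (g,) = torch.autograd.grad(loss, w, create_graph=True)
        update = opt_net(g.unsqueeze(-1)).squeeze(-1)  # learned update rule
        w = w + 0.1 * update  # keep the step differentiable for meta-training
        meta_loss = meta_loss + loss
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```

The essential design choice is keeping each inner update differentiable, so the meta-loss can backpropagate through the whole unrolled trajectory into the learned optimizer's weights.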