
Commit

new
AtlasWang committed Nov 17, 2024
1 parent 213d87a commit 7652f7d
Showing 5 changed files with 33 additions and 14 deletions.
Binary file modified .DS_Store
Binary file modified 123.png
33 changes: 22 additions & 11 deletions group.html
@@ -725,15 +725,20 @@ <h2>Alumni</h2>


<ul>
Visitors/Interns
Long-Term Visitors (with VITA for 6 months or more)
<ul>
<li> <a href="https://simoneangarano.github.io/">Simone Angarano</a>, Ph.D. student in ML at Politecnico di Torino, Italy, visiting VITA during Aug 2023 - May 2024</li>
<li> <a href="https://luuyin.com/">Lu Yin</a>, Ph.D. student, CS@Eindhoven University of Technology (TU/e), Netherlands, visiting VITA during Aug 2022 - Jul 2023 (now Assistant Professor, CS@University of Surrey)</li>
<li> Yee Yang Tee, Ph.D. student, EEE@Nanyang Technological University, Singapore, visiting VITA during Aug 2022 - Jan 2023 (now Co-Founder, inspections.ai)</li>
<li> <a href="https://luuyin.com/">Lu Yin</a>, Ph.D. student, CS@Eindhoven University of Technology (TU/e), Netherlands, visiting VITA during Aug 2022 - Jul 2023 (now Assistant Professor, CS@University of Surrey, UK)</li>
<li> Yee Yang Tee, Ph.D. student, EEE@Nanyang Technological University, Singapore, visiting VITA during Aug 2022 - Jan 2023 (now Co-Founder, inspections.ai, Singapore)</li>
<li> Artur André Oliveira, Ph.D. student, CS@University of São Paulo, Brazil, visiting VITA during Dec 2021 - May 2022 (now Postdoctoral Researcher, CS@University of São Paulo) </li>
<li> Iago Breno Araujo, Ph.D. student, CS@University of São Paulo, Brazil, visiting VITA during Dec 2019 - May 2020 (now Data Scientist, FAPESP, Brazil) </li>
<li> <a href="https://williamyang1991.github.io/">Shuai Yang</a>, Ph.D. student, CS@Peking University, China, visiting VITA during Sep 2018 - Sep 2019 (now Assistant Professor, CS@Peking University)</li>
</ul>
</ul>

<br>
<ul>
Short-Term Interns (with VITA for 3 months or remotely - we now rarely take interns)
<ul>
<li>Saebyeol Shin, undergraduate, Sungkyunkwan University, South Korea, Fall 2023 [remote] (Next Move: Ph.D. student, CS@Cornell)</li>
<li>Diganta Misra, M.S. student, MILA/University of Montréal, Canada, Summer 2023 [remote] (Next Move: ELLIS Ph.D. student, MPI-IS)</li>
<li> Mukund Varma T, undergraduate, ME@IIT Madras, Summer 2022 [remote] (Next Move: Ph.D. student, CS@UCSD)</li>
@@ -748,7 +753,7 @@ <h2>Alumni</h2>
<li> Tianxin Wei, undergraduate, School of Gifted Young@USTC, Summer 2020 [remote] (Next Move: Ph.D. student, CS@UIUC)</li>
<li> Aaditya Singh, undergraduate, IIT Kanpur, Summer 2020 [remote] (Next Move: M.S. student, CS@Georgia Tech)</li>
<li> Shreeshail Hingane, undergraduate, IIT Kanpur, Summer 2020 [remote] (Next Move: Research Fellow, Microsoft Research, Bangalore)</li>
<li> Xuxi Chen, undergraduate, Statistics@USTC, Summer 2019 [remote] (Next Move: joining VITA as Ph.D. student)</li>
<li> Xuxi Chen, undergraduate, Statistics@USTC, Summer 2019 (Next Move: joining VITA as Ph.D. student)</li>
<li> Yue Wang, M.S. student, CS@Rice University, visiting VITA during Summer 2018 (Next Move: Ph.D. student, ECE@Rice University)</li>
<li> Yifan Jiang, undergraduate, EE@HUST, visiting VITA during Summer 2018 (Next Move: joining VITA as Ph.D. student)</li>
</ul>
@@ -779,7 +784,7 @@ <h2>Alumni</h2>


<ul>
Undergraduate RAs & Senior Design Teams
Undergraduate RAs
<ul>
<li> Kevin Wang, CS@UT Austin, undergraduate, May 2022 - May 2024 (Next Move: joining VITA as Ph.D. student) </li>
<li> Codey Sun, ECE@UT Austin, undergraduate, Aug 2023 - May 2024 (Next Move: M.S. student, EE@Stanford) </li>
@@ -793,16 +798,22 @@ <h2>Alumni</h2>
<li>Jason Zhang, ECE@UT Austin, undergraduate, Aug 2020 - May 2021 (Next Move: Ph.D. student, CS@CMU)</li>
<li>Ryan King, CSE@TAMU, undergraduate, Aug 2019 - Aug 2020 (Next Move: Ph.D. student, CSE@TAMU)</li>
<li>Josiah Coad, Math@TAMU, undergraduate, Jan 2019 - Aug 2020 (Next Move: Ph.D. student, CS@UIUC)</li>
<li>Benjamin McKenzie, CSE@TAMU, undergraduate, Jan 2020 - Aug 2020</li>
<li>Ryan Wells, CSE@TAMU, undergraduate, Aug 2018 - Aug 2019 (Next Move: Software Engineer, JP Morgan)</li>
<li>Chase Brown, CSE@TAMU, undergraduate, Summer 2018</li>
<br>
<li>UT ECE Senior Design Team (2022 - 2023): John Lu, Nathan Stern, Steven Nguyen, Mingi Hong, Nguyen Pham </li>
</ul>

</ul>

<ul>
UG Honor Thesis or Senior Design
<ul>
<li>UT ECE Senior Design Team (2022 - 2023): John Lu, Nathan Stern, Steven Nguyen, Mingi Hong, Nguyen Pham </li>
<li>UT ECE Senior Design Team (2021 - 2022): Rishabh Parekh, Kush Desai, Akarsh Kumar, Sahil Vaidya, Viraj Parikh, Malav Shah </li>
<li>UT ECE Senior Design Team (2021 - 2022): Savi Hanagud, Qingyang Hu, Cathy Le, Saaketh Rao, Jeffrey Wallace, Ming Zhao </li>
<li>UT ECE Senior Design Team (2020 - 2021): Jessica Pham, Soroush Famili, William Gu, Matt MacDonald, Ryed Ahmed <a href="https://github.com/SeniorDesignF20/AmazonUnderstandingProductImages"> [Code] </a> <a href="https://vimeo.com/543699826"> [Demo] </a> [Winner of Most Viewed Capstone Final Presentation Award]</li>
<li>Benjamin McKenzie, CSE@TAMU, undergraduate, Jan 2020 - Aug 2020</li>
<li>Ryan Wells, CSE@TAMU, undergraduate, Aug 2018 - Aug 2019 (Next Move: Software Engineer, JP Morgan)</li>
<li>Chase Brown, CSE@TAMU, undergraduate, Summer 2018</li>
</ul>

</ul>
8 changes: 7 additions & 1 deletion index.html
@@ -179,6 +179,12 @@ <h2>News</h2>
<p>
</ul>


<b style="color:rgb(68, 68, 68)">[Nov. 2024]</b>
<ul style="margin-bottom:5px">
<li> 1 Lancet Digital Health (AI cardiac imaging) accepted</li>
</ul>

<b style="color:rgb(68, 68, 68)">[Oct. 2024]</b>
<ul style="margin-bottom:5px">
<li> 1 TMLR (amortized 3D Gaussians) accepted</li>
@@ -188,7 +194,7 @@ <h2>News</h2>

<b style="color:rgb(68, 68, 68)">[Sep. 2024]</b>
<ul style="margin-bottom:5px">
<li>8 NeurIPS'24 (LightGaussian + expressive gaussian avatar + Read-ME + Found in the Middle + Large Spatial Model + transformer training dynamics + Diffusion4D + AlphaPruning) accepted </li>
<li>8 NeurIPS'24 (LightGaussian + expressive gaussian avatar + Read-ME + multi-scale RoPE + Large Spatial Model + transformer training dynamics + Diffusion4D + AlphaPruning) accepted </li>
<li>1 NeurIPS Datasets & Benchmarks Track'24 (Model-GLUE) accepted </li>
<li> 1 IEEE Trans. PAMI (symbolic visual RL) accepted</li>
<li>Our group co-organized the ECCV 2024 <a href="https://dd-challenge-main.vercel.app/">"Sometimes Less is More: the 1st Dataset Distillation Challenge"</a></li>
6 changes: 4 additions & 2 deletions publication.html
@@ -159,6 +159,7 @@ <h2>Journal Paper</h2>
<div class="trend-entry d-flex">
<div class="trend-contents">
<ul>
<li>E. Oikonomou, A. Vaid, G. Holste*, A. Coppi, R. McNamara, C. Baloescu, H. Krumholz, Z. Wang, D. Apakama, G. Nadkarni, R. Khera<br> <b style="color:rgb(71, 71, 71)">“Artificial intelligence-guided detection of under-recognized cardiomyopathies on point-of-care cardiac ultrasound: a multi-center study”</b><br>Lancet Digital Health, 2024. <a href="https://www.medrxiv.org/content/10.1101/2024.03.10.24304044v2">[Paper]</a> <a href="h">[Code]</a></li>
<li>W. Zheng*, S. Sharan*, Z. Fan*, K. Wang*, Y. Xi*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Symbolic Visual Reinforcement Learning: A Scalable Framework with Object-Level Abstraction and Differentiable Expression Search”</b><br>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024. <a href="https://arxiv.org/abs/2212.14849">[Paper]</a> <a href="https://github.com/VITA-Group/DiffSES">[Code]</a></li>
<li> H. Yang*, Y. Liang, X. Guo, L. Wu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Pruning Before Training May Improve Generalization, Provably”</b><br> Journal of Machine Learning Research (JMLR), 2024. <a href="">[Paper]</a> <a href="">[Code]</a></li>
<li> H. Yang*, Z. Jiang*, R. Zhang, Y. Liang, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Neural Networks with Sparse Activation Induced by Large Bias: Tighter Analysis with Bias-Generalized NTK”</b><br> Journal of Machine Learning Research (JMLR), 2024. <a href="">[Paper]</a> <a href="">[Code]</a></li>
@@ -174,6 +175,7 @@ <h2>Journal Paper</h2>
<li> G. Holste*, E. Oikonomou, B. Mortazavi, A. Coppi, K. Faridi, E. Miller, J. Forrest, R. McNamara, L. Ohno-Machado, N. Yuan, A. Gupta, D. Ouyang, H. Krumholz, Z. Wang, and R. Khera<br> <b style="color:rgb(71, 71, 71)">“Severe Aortic Stenosis Detection by Deep Learning Applied to Echocardiography”</b><br>European Heart Journal (EHJ), 2023. <a href="https://academic.oup.com/eurheartj/advance-article/doi/10.1093/eurheartj/ehad456/7248551">[Paper]</a> <a href="https://github.com/CarDS-Yale/echo-severe-AS">[Code]</a></li>
<li> W. Zheng*, H. Yang, J. Cai, P. Wang*, X. Jiang, S. Du, Y. Wang, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Integrating the Traffic Science with Representation Learning for City-Wide Network Congestion Prediction”</b><br>Elsevier Information Fusion, 2023. <a href="https://www.sciencedirect.com/science/article/abs/pii/S1566253523001537">[Paper]</a> <a href="https://github.com/VITA-Group/TinT">[Code]</a></li>
<li> W. Zheng*, E. Huang, N. Rao, S. Katariya, Z. Wang, and K. Subbian<br> <b style="color:rgb(71, 71, 71)">“You Only Transfer What You Share: Intersection-Induced Graph Transfer Learning for Link Prediction”</b><br>Transactions on Machine Learning Research (TMLR), 2023. <a href="https://arxiv.org/abs/2302.14189">[Paper]</a> <a href="https://github.com/amazon-science/gnn-tail-generalization">[Code]</a></li>
<li> X. Yang, Z. Wang, S. Hu, C. Kim, S. Yu, M. Pajic, R. Manohar, Y. Chen, and H. Li<br> <b style="color:rgb(71, 71, 71)">“Neuro-Symbolic Computing: Advancements and Challenges in Hardware-Software Co-Design”</b><br>IEEE Transactions on Circuits and Systems II (TCAS-II), 2023. <a href="https://ieeexplore.ieee.org/document/10327770">[Paper]</a> <a href="">[Code]</a></li>
<li> Z. Li*, T. Chen*, L. Li, B. Li, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Can Pruning Improve Certified Robustness of Neural Networks?”</b><br>Transactions on Machine Learning Research (TMLR), 2023. <a href="https://openreview.net/forum?id=6IFi2soduD">[Paper]</a> <a href="https://github.com/VITA-Group/CertifiedPruning">[Code]</a></li>
<li>H. Wang*, J. Hong, J. Zhou, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts”</b><br>Transactions on Machine Learning Research (TMLR), 2023. <a href="https://openreview.net/forum?id=11pGlecTz2">[Paper]</a> <a href="">[Code]</a></li>
<li>P. Narayanan, X. Hu, Z. Wu*, M. Thielke, J. Rogers, A. Harrison, J. D’Agostino, J. Brown, L. Quang, J. Uplinger, H. Kwon, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“A Multi-Purpose Real Haze Benchmark with Quantifiable Haze Levels and Ground Truth”</b><br>IEEE Transactions on Image Processing (TIP), 2023. <a href="https://arxiv.org/abs/2206.06427">[Paper]</a> <a href="https://a2i2-archangel.vision/">[Code]</a></li>
@@ -208,11 +210,11 @@ <h2>Conference Paper</h2>
<ul>
<li>Z. Fan*, K. Wang*, K. Wen, Z. Zhu*, D. Xu*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS"</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2024. (Spotlight) <a href="https://arxiv.org/abs/2311.17245">[Paper]</a> <a href="https://lightgaussian.github.io/">[Code]</a>
<li>H. Hu*, Z. Fan*, T. Wu, Y. Xi*, S. Lee*, G. Pavlakos, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"Expressive Gaussian Human Avatars from Monocular RGB Video"</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2024. <a href="https://arxiv.org/abs/2407.03204">[Paper]</a> <a href="https://evahuman.github.io/">[Code]</a>
<li>R. Cai*, Y. Ro, G. Kim, P. Wang*, B. Bejnordi, A. Akella, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design"</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2024. <a href="">[Paper]</a> <a href="https://github.com/VITA-Group/READ-ME">[Code]</a>
<li>R. Cai*, Y. Ro, G. Kim, P. Wang*, B. Bejnordi, A. Akella, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design"</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2024. <a href="https://utns.cs.utexas.edu/assets/papers/neurips24-readme.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/READ-ME">[Code]</a>
<li>Z. Zhang*, R. Chen*, S. Liu*, Z. Yao, O. Ruwase, B. Chen, X. Wu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">"Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding"</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2024. <a href="https://arxiv.org/abs/2403.04797">[Paper]</a> <a href="https://github.com/VITA-Group/Ms-PoE">[Code]</a>
<li>Z. Fan*, J. Zhang, W. Cong*, P. Wang*, R. Li, K. Wen, S. Zhou, A. Kadambi, Z. Wang, D. Xu, B. Ivanovic, M. Pavone, and Y. Wang<br> <b style="color:rgb(71, 71, 71)">“Large Spatial Model: End-to-end Unposed Images to Semantic 3D”</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2024. <a href="https://arxiv.org/abs/2410.18956">[Paper]</a> <a href="https://largespatialmodel.github.io/">[Code] </a>
<li>H. Yang*, B. Kailkhura, Z. Wang, and Y. Liang<br> <b style="color:rgb(71, 71, 71)">“Training Dynamics of Transformers to Recognize Word Co-occurrence via Gradient Flow Analysis"</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2024. <a href="https://arxiv.org/abs/2410.09605">[Paper]</a> <a href="">[Code] </a>
<li>H. Liang, Y. Yin, D. Xu*, H. Liang*, Z. Wang, K. Plataniotis, Y. Zhao, and Y. Wei<br> <b style="color:rgb(71, 71, 71)">“Diffusion4D: Fast Spatial-temporal Consistent 4D generation via Video Diffusion Models"</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2024. <a href="">[Paper]</a> <a href="">[Code] </a>
<li>H. Liang, Y. Yin, D. Xu*, H. Liang*, Z. Wang, K. Plataniotis, Y. Zhao, and Y. Wei<br> <b style="color:rgb(71, 71, 71)">“Diffusion4D: Fast Spatial-temporal Consistent 4D generation via Video Diffusion Models"</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2024. <a href="https://arxiv.org/abs/2405.16645">[Paper]</a> <a href="https://github.com/VITA-Group/Diffusion4D">[Code] </a>
<li>H. Lu, Y. Zhou, S. Liu*, Z. Wang, M. Mahoney, and Y. Yang<br> <b style="color:rgb(71, 71, 71)">“AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models"</b><br>Advances in Neural Information Processing Systems (NeurIPS), 2024. <a href="https://arxiv.org/abs/2410.10912">[Paper]</a> <a href="https://github.com/haiquanlu/AlphaPruning">[Code] </a>
<li>X. Zhao, G. Sun, R. Cai*, Y. Zhou, P. Li, P. Wang*, B. Tan, Y. He, L. Chen, Y. Liang, B. Chen, B. Yuan, H. Wang, A. Li, Z. Wang, and T. Chen*<br> <b style="color:rgb(71, 71, 71)">“Model-GLUE: Democratized LLM Scaling for A Large Model Zoo in the Wild”</b><br>Advances in Neural Information Processing Systems, Track on Datasets and Benchmarks (NeurIPS D & B), 2024. <a href="https://arxiv.org/pdf/2410.05357">[Paper]</a> <a href="https://github.com/Model-GLUE/Model-GLUE">[Code] </a> </li>
<li>Z. Zhu*, Z. Fan*, Y. Jiang*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“FSGS: Real-Time Few-shot View Synthesis using Gaussian Splatting”</b><br>European Conference on Computer Vision (ECCV), 2024. <a href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/05583.pdf">[Paper]</a> <a href="https://github.com/VITA-Group/FSGS">[Code] </a> </li>

0 comments on commit 7652f7d
