Education

Ph.D. in Cognitive Science.

B.Eng. in Information Science and Communication Engineering.

Research Interests

Representation Learning: I am interested in efficiently forming and analysing representations of machine learning models. These representations include vectors, matrices, and graphs.

Linear Models: At the end of the day, we are all using linear regression.

Current Status

Quant Researcher at Jump Trading

News

09/2024, joined Jump Trading as a Quant Researcher.

08/2024, left AWS. During my three years at AWS, I contributed to three products: Amazon CodeWhisperer, AWS Clean Rooms Differential Privacy, and AWS Clean Rooms ML. I also published five papers (three first-authored), with one more under review, and filed two US patents.

10/2023, speaking as a Distinguished Speaker at the University of North Dakota on privacy-related topics.

08/2023, speaking at the Federated Learning & Privacy Seminar at Google on our work on membership inference attacks via quantile regression.

04/2023, speaking at the Privacy Seminar at Google on our work on private boosting with linear base models.

12/2022, speaking as a panelist at the SyntheticData4ML Workshop at NeurIPS.

05/2021, joined Amazon Web Services as an Applied Scientist.

02/2021, finished my Ph.D. in Cognitive Science (mostly machine learning).

04/2017-02/2021, working with Prof. Virginia R. de Sa.

06/2020-09/2020, research internship at Amazon in Seattle, WA (virtual).

04/2020, speaking at UCSD AI Seminar about my research on comparing neural networks with both features and gradients.

03/2020, a talk @ the Vision research group led by Leonid Sigal at the University of British Columbia.

06/2019-12/2019, research internship at Amazon in Cambridge, UK.

11/2019, a talk @ CLunch at the University of Pennsylvania.

11/2019, a talk @ the Computational Linguistics group led by Paul Smolensky at Johns Hopkins University.

10/2019, a talk @ the Language Technology Lab at the University of Cambridge. (abstract)

06/2019, attending Amazon re:MARS 2019.

03/2019, attending the Amazon Grad Research Symposium in Seattle, USA.

02/2019, speaking at UCSD AI Seminar about my research on learning distributed representations of sentences. (slides)

12/2018, a talk at the IRASL workshop @ NeurIPS 2018. (slides)

10/2018, invited to be an inaugural member of ACL Special Interest Group on Representation Learning (SIGREP).

06/2018-09/2018, research internship at Microsoft Research, Redmond, working with Prof. Paul Smolensky.

05/2018, advanced to Ph.D. candidacy; committee members: Virginia de Sa, Ben Bergen, Eran Mukamel, Lawrence Saul, and Ndapa Nakashole. (slides)

11/2017, speaking at UCSD AI Seminar about my research on sentence representation learning. (slides)

06/2017-09/2017, research internship at Adobe Research, working on text-location based image search.

09/2015-12/2017, Machine Learning, Perception, and Cognition Lab, working with Prof. Zhuowen Tu.

06/2016-09/2016, research internship at Adobe Research, working on unsupervised sentence representation learning.

01/2015-04/2015, B.Sc. in Information Science @ Zhejiang University, working with Prof. Zhiyu Xiang.

Publications

"Reconstruction Attacks on Machine Unlearning: Simple Models are Vulnerable", Martin Bertran*, Shuai Tang*, Michael Kearns, Jamie Morgenstern, Aaron Roth, Zhiwei Steven Wu (*equal contribution).

"Membership Inference Attacks on Diffusion Models via Quantile Regression", Shuai Tang*, Zhiwei Steven Wu*, Sergul Aydore, Michael Kearns, Aaron Roth (*equal contribution, ICML 2024).

"Improved Differentially Private Regression via Gradient Boosting", Shuai Tang, Sergul Aydore, Michael Kearns, Saeyoung Rho, Aaron Roth, Yichen Wang, Yu-Xiang Wang, Zhiwei Steven Wu (IEEE SaTML 2024).

"Scalable Membership Inference Attacks via Quantile Regression", Martin Bertran*, Shuai Tang*, Michael Kearns, Jamie Morgenstern, Aaron Roth, Zhiwei Steven Wu (*equal contribution, NeurIPS 2023).

"Private Synthetic Data for Multitask Learning and Marginal Queries", Giuseppe Vietri, Cedric Archambeau, Sergul Aydore, William Brown, Michael Kearns, Aaron Roth, Ankit Siva, Shuai Tang, Zhiwei Steven Wu (NeurIPS 2023).

"Differentially Private Gradient Boosting on Linear Learners for Tabular Data", Saeyoung Rho, Cedric Archambeau, Sergul Aydore, Beyza Ermis, Michael Kearns, Aaron Roth, Shuai Tang, Yu-Xiang Wang, Zhiwei Steven Wu (TSRML, NeurIPS 2023).

"Improving Robustness in Motor Imagery Brain-Computer Interfaces", Mahta Mousavi, Eric Lybrand, Shuangquan Feng, Shuai Tang, Rayan Saab, Virginia R. de Sa (DistShift, NeurIPS 2021).

"Fast Adaptation with Linearized Neural Networks", Wesley J. Maddox, Shuai Tang, Pablo G. Moreno, Andrew G. Wilson, Andreas Damianou (AISTATS 2021).

"Deep Transfer Learning with Ridge Regression", Shuai Tang, Virginia R. de Sa (ArXiv, 2020).

"Similarity of Neural Networks with Gradients", Shuai Tang, Wesley J. Maddox, Charlie Dickens, Tom Diethe, Andreas Damianou (ArXiv, 2020).

"Supervised Spike Sorting Using Deep Convolutional Siamese Network and Hierarchical Clustering", Yinghao Li, Shuai Tang, Virginia R. de Sa (2019).

"An Empirical Study on Post-processing Methods for Word Embeddings", Shuai Tang, Mahta Mousavi, Virginia de Sa (ArXiv, 2019).

"Exploiting Invertible Decoders for Unsupervised Sentence Representation Learning", Shuai Tang, Virginia de Sa (ACL 2019).

"A Simple Recurrent Unit with Reduced Tensor Product Representations", Shuai Tang, Paul Smolensky, Virginia de Sa (ArXiv, 2019).

"Improving Sentence Representations with Consensus Maximisation", Shuai Tang, Virginia de Sa (IRASL, NeurIPS 2018).

"Speeding up Context-based Sentence Representation Learning with Non-autoregressive Convolutional Decoding", Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, and Virginia de Sa (RepL4NLP, ACL 2018).

"Trimming and Improving Skip-thought Vectors", Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, and Virginia de Sa (ArXiv, 2017).

"Rethinking Skip-thought: A Neighborhood based Approach", Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, and Virginia de Sa (RepL4NLP, ACL 2017).

"What Happened to My Dog in That Network: Unraveling Top-down Generators in Convolutional Neural Networks", Patrick W. Gallagher, Shuai Tang, and Zhuowen Tu (ArXiv, 2015).

Conference Notes

[0] Shuai Tang, Andrej Zukov-Gregoric, "Conference Notes / ACL2018".

Academic Services

  • Member
    2018: Inaugural member of ACL Special Interest Group on Representation Learning (SIGREP)
  • Reviewer
    ACL, NAACL, EACL, AACL, EMNLP,
    NeurIPS, ICML, ICLR, AISTATS, AAAI,
    TNNLS, TMLR

Teaching

  • Teaching Assistant
    COGS 118B Intro to Machine Learning II (2020 Fall)
    COGS 9 Intro to Data Science (2020 Winter)
    COGS 118B Intro to Machine Learning II (2018 Fall)
    COGS 108 Data Science in Practice (2018 Winter)
    COGS 118B Intro to Machine Learning II (2017 Fall)
    COGS 181 Neural Networks and Deep Learning (2017 Winter)
    COGS 118B Intro to Machine Learning II (2016 Fall)
    COGS 118A Intro to Machine Learning I (2016 Winter)

Get In Touch