I am a Ph.D. candidate in Cognitive Science @ University of California, San Diego, where I work with Prof. Virginia de Sa.
Representation Learning: I still believe in the representational approach to machine learning tasks, even though it has been deeply criticised in philosophy as possibly unable to encode human intelligence thoroughly. In particular, I am interested in learning vectorised representations, both localist and distributed, through unsupervised algorithms, since unlabelled data is abundant in the wild. More interestingly, structured data, such as human language, contains an incredible amount of human knowledge and inductive biases that can be exploited to build machine learning algorithms and deep learning models for learning structured vector representations.
Transfer Learning: To my limited knowledge, it has two meanings. One is that information learnt from data in one domain can be transferred to related/similar domains, and even to domains far from the source; for example, representations learnt on general-domain language corpora can be effective on tasks in specific domains. The other is that the inductive biases discovered and embodied in a model's design can be transferred to other related/similar tasks; for example, a model built for logical entailment can also be applied to natural language entailment. The two aspects interact with each other and have a wide impact on the design of modern neural networks.
Natural Language Understanding: The distributional hypothesis still hasn't been fully exploited, as current machine learning systems apply it only in a fixed way. Although a better account of linguistic meaning is a hybrid of denotational and distributional semantics, building machine learning systems on distributional semantics alone remains important, since it doesn't necessarily require labelled data. I am interested in how we should apply distributional semantics to representation learning in a non-fixed, active way, acquiring varying amounts of contextual information.
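To make the distributional hypothesis concrete, here is a minimal sketch (not any specific system of mine): words occurring in similar contexts receive similar vectors, here via a co-occurrence matrix and a truncated SVD. The toy corpus, window size, and embedding dimension are all illustrative assumptions.

```python
import numpy as np

# Toy corpus (an assumption for illustration only).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric word-context co-occurrence matrix with a window of 1.
C = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 1), min(len(sent), i + 2)):
            if j != i:
                C[idx[w], idx[sent[j]]] += 1

# Truncated SVD turns sparse counts into low-dimensional
# distributed representations (one 2-d vector per word).
U, S, _ = np.linalg.svd(C)
emb = U[:, :2] * S[:2]

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "cat" and "dog" share contexts ("the", "sat"), so their
# vectors end up close even though the words never co-occur
# with each other in the same position.
print(cos(emb[idx["cat"]], emb[idx["dog"]]))
```

The fixed window size is exactly the kind of rigid use of context mentioned above; an "active" variant would vary how much context each word draws on.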
A talk at the IRASL workshop @ NeurIPS 2018; slides are here.
11/2018, two long papers, one with Virginia (poster) and the other with Paul and Virginia (oral), accepted to the IRASL workshop at NeurIPS 2018. (scores: 4, 4, 4 out of 5 for both papers)
10/2018, invited to be an inaugural member of ACL Special Interest Group on Representation Learning (SIGREP).
06/2018-09/2018, research internship at Microsoft Research, Redmond, working with Prof. Paul Smolensky.
09/2018, a long paper rejected by the annual conference on Neural Information Processing Systems (NeurIPS 2018). (scores: 5, 6, 6 out of 10)
08/2018, a short paper rejected by Empirical Methods in Natural Language Processing (EMNLP2018). (scores: 2, 3, 3 out of 5)
05/2018, a long paper with Hailin, Chen, Zhaowen and Virginia, accepted to 3rd Workshop on Representation Learning for NLP. (scores: 3, 4 out of 5)
04/2018, two papers rejected by the annual meeting of the Association for Computational Linguistics (ACL2018). (scores: 4, 4, 4 out of 6, and 4, 4, 4 out of 6)
01/2018, one paper rejected by the International Conference on Learning Representations (ICLR2018). (scores: 3, 6, 7 out of 10)
09/2017, one paper rejected by the annual conference on Neural Information Processing Systems (NIPS2017). (scores: 4, 5, 6 out of 10)
06/2017-09/2017, research internship at Adobe Research, working on text-location based image search.
05/2017, a long paper with Hailin, Chen, Zhaowen, and Virginia, accepted to 2nd Workshop on Representation Learning for NLP. (scores: 3, 4 out of 5)
04/2017, de Sa Lab, working with Prof. Virginia de Sa.
06/2016-09/2016, research internship at Adobe Research, working on unsupervised sentence representation learning.
01/2016-04/2016, lab rotation in Prof. Garrison W. Cottrell's lab, working on the recurrent attention model for fine-grained classification tasks.
09/2015-12/2015, Machine Learning, Perception, and Cognition Lab, working with Prof. Zhuowen Tu.
01/2015-04/2015, B.Sc. in Information Science @ Zhejiang University, working with Prof. Zhiyu Xiang.
Shuai Tang, Andrej Zukov-Gregoric, "Conference Notes / ACL2018".
Shuai Tang, Paul Smolensky, Virginia de Sa, "Learning Distributed Representations of Symbolic Structure Using Binding and Unbinding Operations", (ArXiv, 2018).
[2c] Shuai Tang, Virginia de Sa, "Improving Sentence Representations with Multi-view Frameworks", (ArXiv, 2018).
[2b] Shuai Tang, Virginia de Sa, "Exploiting Invertible Decoders for Unsupervised Sentence Representation Learning", (ArXiv, 2018).
[2a] Shuai Tang, Virginia de Sa, "Multi-view Sentence Representation Learning", (ArXiv, 2018).
Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, and Virginia de Sa, "Speeding up Context-based Sentence Representation Learning with Non-autoregressive Convolutional Decoding", (RepL4NLP, 2018).
Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, and Virginia de Sa, "Trimming and Improving Skip-thought Vectors", (ArXiv, 2017).
Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, and Virginia de Sa, "Rethinking Skip-thought: A Neighborhood based Approach", (RepL4NLP, 2017).
COGS118B Intro to Machine Learning II (2018 Fall)
COGS108 Data Science in Practice (2018 Winter)
COGS118B Intro to Machine Learning II (2017 Fall)
COGS181 Neural Networks and Deep Learning (2017 Winter)
COGS118B Intro to Machine Learning II (2016 Fall)
COGS118A Intro to Machine Learning I (2016 Winter)