Bio.

I am a Ph.D. candidate in Cognitive Science @ University of California, San Diego.

Currently, I am working with Prof. Virginia de Sa.

Research Interests

Representation Learning: I still believe in the representational approach to machine learning, even though it has been deeply criticised in philosophy on the grounds that it may not be able to encode human intelligence thoroughly. In particular, I am interested in learning vectorised representations, both localist and distributed, through unsupervised algorithms, since unlabelled data is massive in the wild. More interestingly, structured data, such as human language, contains an incredible amount of human knowledge and inductive bias that could be utilised to build machine learning algorithms and deep learning models for learning structured vector representations.
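As a toy illustration of the unsupervised, distributed end of this spectrum (the classic LSA recipe, not a method from any paper listed below): count word co-occurrences in unlabelled text, then factorise the count matrix to obtain dense word vectors. The corpus and all names here are illustrative assumptions only.

```python
import numpy as np

# Toy corpus; in practice this would be massive unlabelled text.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat chased a dog",
]
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric word-word co-occurrence matrix with window size 1.
C = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 1), min(len(sent), i + 2)):
            if j != i:
                C[idx[w], idx[sent[j]]] += 1

# Truncated SVD yields dense, distributed embeddings (LSA-style).
U, S, _ = np.linalg.svd(C)
dim = 2
embeddings = U[:, :dim] * S[:dim]
print(embeddings.shape)  # (9, 2): one 2-d vector per vocabulary word
```

Every step here uses only unlabelled text, which is the practical appeal of the distributional route.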

Transfer Learning: In my limited scope, it has two meanings. The first is that information learnt from data in one domain can be transferred to related or similar domains, and even to domains far from the source; for example, representations learnt on general-domain language corpora can be effective on tasks in specific domains. The second is that the inductive biases discovered and embodied in a model's design can be transferred to other related or similar tasks; for example, the same model built for logical entailment can also be applied to natural language entailment. The two aspects interact with each other, and together they have a wide impact on the design of modern neural networks.
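The first sense of transfer can be sketched concretely: learn a representation on plentiful "general-domain" data, freeze it, and reuse it on a small labelled "specific-domain" task. The sketch below is an assumption-laden toy (synthetic data; PCA and a nearest-centroid classifier chosen purely for brevity), not the method of any paper listed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Source "general domain": plenty of unlabelled data; learn a representation.
source = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))
mean = source.mean(axis=0)
_, _, Vt = np.linalg.svd(source - mean, full_matrices=False)

def project(X):
    """Frozen 2-d representation learnt on the source domain (PCA)."""
    return (X - mean) @ Vt[:2].T

# Target "specific domain": few labelled points; reuse the frozen features.
target_X = rng.normal(size=(20, 10))
target_y = (target_X[:, 0] > np.median(target_X[:, 0])).astype(int)
Z = project(target_X)

# A tiny classifier (nearest centroid) trained in the transferred space.
centroids = np.stack([Z[target_y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(-1), axis=1)
print("train accuracy:", (pred == target_y).mean())
```

The point is the division of labour: the representation is fit once on abundant source data, while the target task only has to fit a very small model on top.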

Natural Language Understanding: The distributional hypothesis still has not been fully utilised, as current advanced machine learning systems apply it only in a fixed way. Although a better definition of linguistic meaning is a hybrid of denotational and distributional semantics, building better machine learning systems on distributional semantics remains important, as it does not necessarily require labelled data. I am interested in how we should apply distributional semantics to representation learning in a non-fixed or active way, acquiring varying amounts of contextual information.
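One way to see what "varying amounts of contextual information" means in practice: the distributional similarity of two words depends on how wide a context window the counts are taken over. A toy sketch (illustrative corpus and window sizes, not a proposal from my work):

```python
import numpy as np

corpus = "the quick brown fox jumps over the lazy dog the quick dog".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

def context_vectors(window):
    """Distributional vectors: represent each word by counts of the
    words appearing within `window` positions of its occurrences."""
    V = np.zeros((len(vocab), len(vocab)))
    for i, w in enumerate(corpus):
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if j != i:
                V[idx[w], idx[corpus[j]]] += 1
    return V

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The same word pair looks more or less similar depending on how much
# context is taken into account.
for window in (1, 3):
    V = context_vectors(window)
    print(window, cosine(V[idx["fox"]], V[idx["dog"]]))
```

In this toy corpus, "fox" and "dog" share no immediate neighbours (similarity 0 at window 1) but do share wider context, so widening the window changes the answer; a fixed window bakes in one such choice.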

Current Status

Working as a teaching assistant for COGS118B, Introduction to Machine Learning II.

We merged our two previously rejected papers into one with more experiments, and interestingly, both led to the same conclusion: in multi-view unsupervised learning, an RNN and a linear model help each other perform better. Check out our paper here.

News

06/2018-09/2018, research internship at Microsoft Research, Redmond, working with Prof. Paul Smolensky.

09/2018, Andrej Zukov-Gregoric and I met at ACL2018, and we decided to put our notes together in an organised file. Here it is.

09/2018, a long paper rejected by Neural Information Processing Systems (NIPS2018). (scores: 5, 6, 6 out of 10)

08/2018, a short paper rejected by Empirical Methods in Natural Language Processing (EMNLP2018). (scores: 2, 3, 3 out of 5)

05/2018, advanced to Ph.D. candidacy; committee members: Virginia de Sa, Ben Bergen, Eran Mukamel, Lawrence Saul, Ndapa Nakashole, and Rich Zemel. (slides)

05/2018, a long paper with Hailin, Chen, Zhaowen, and Virginia, accepted to 3rd Workshop on Representation Learning for NLP. (scores: 3, 4 out of 5)

04/2018, two papers rejected by the annual meeting of the Association for Computational Linguistics (ACL2018). (scores: 4, 4, 4 out of 6, and 4, 4, 4 out of 6)

01/2018, one paper rejected by the International Conference on Learning Representations (ICLR2018). (scores: 3, 6, 7 out of 10)

11/2017, speaking at UCSD AI Seminar about my research on sentence representation learning. (slides)

09/2017, one paper rejected by the annual conference on Neural Information Processing Systems (NIPS2017). (scores: 4, 5, 6 out of 10)

06/2017-09/2017, research internship at Adobe Research, working on text-location based image search.

05/2017, a long paper with Hailin, Chen, Zhaowen, and Virginia, accepted to 2nd Workshop on Representation Learning for NLP. (scores: 3, 4 out of 5)

04/2017, joined the de Sa Lab, working with Prof. Virginia de Sa.

06/2016-09/2016, research internship at Adobe Research, working on unsupervised sentence representation learning.

01/2016-04/2016, lab rotation in Prof. Garrison W. Cottrell's lab, working on the recurrent attention model for fine-grained classification tasks.

09/2015-12/2015, Machine Learning, Perception, and Cognition Lab, working with Prof. Zhuowen Tu.

01/2015-04/2015, B.Sc. in Information Science @ Zhejiang University, working with Prof. Zhiyu Xiang.

Publications

[0] Shuai Tang, Andrej Zukov-Gregoric, "Conference Notes / ACL2018".

[1c] Shuai Tang, Virginia de Sa, "Improving Sentence Representations with Multi-view Frameworks", (arXiv, 2018).

[1b] Shuai Tang, Virginia de Sa, "Exploiting Invertible Decoders for Unsupervised Sentence Representation Learning", (arXiv, 2018).

[1a] Shuai Tang, Virginia de Sa, "Multi-view Sentence Representation Learning", (arXiv, 2018).

[2] Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, and Virginia de Sa, "Speeding up Context-based Sentence Representation Learning with Non-autoregressive Convolutional Decoding", (RepL4NLP, 2018).

[3] Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, and Virginia de Sa, "Trimming and Improving Skip-thought Vectors", (arXiv, 2017).

[4] Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, and Virginia de Sa, "Rethinking Skip-thought: A Neighborhood based Approach", (RepL4NLP, 2017).

Teaching

  • Teaching Assistant
    COGS118B Intro to Machine Learning II (2018 Fall)
    COGS108 Data Science in Practice (2018 Winter)
    COGS118B Intro to Machine Learning II (2017 Fall)
    COGS181 Neural Networks and Deep Learning (2017 Winter)
    COGS118B Intro to Machine Learning II (2016 Fall)
    COGS118A Intro to Machine Learning I (2016 Winter)
  • Get In Touch

    • Address

      University of California, San Diego
      Cognitive Science Department
      9500 Gilman Drive #0515
      La Jolla, CA, USA 92093-0515
    • Email

      shuaitang93@ucsd.edu