The Deep Learning group’s mission is to advance the state of the art in deep learning and its application to natural language processing, computer vision, multi-modal intelligence, and conversational AI. Our research interests are:
- Neural language modeling for natural language understanding and generation. Ongoing projects include MT-DNN, UniLM, DeBERTa, question answering, and long-text generation.
- Neural symbolic computing. We are developing next-generation architectures that bridge the gap between neural and symbolic representations with neural symbols. Ongoing projects include relational encoding using Tensor-Product Representations and AI for Code.
- Vision-language grounding and understanding. Ongoing projects include UniCL, VinVL, OSCAR, vision-language pre-training, vision-language navigation, image editing and generation, and image commenting and captioning.
- Conversational AI. Ongoing projects include Conversation Learner and SOLOIST, which enable dialog authors to build task-oriented dialog systems at scale via machine teaching and transfer learning; ConvLab, an open-source multi-domain dialog system platform; and response generation for social bots such as Microsoft XiaoIce.
- Fundamental research in understanding and scaling large neural networks. For example, maximal update parametrization (µP) and µTransfer, the feature learning limit of neural networks, and, more generally, the theory of Tensor Programs.
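The Tensor-Product Representation mentioned in the neural symbolic computing bullet encodes a symbolic structure as a sum of outer products of filler vectors (symbols) and role vectors (positions). The following is a toy sketch of that binding/unbinding idea in NumPy; the dimensions and variable names are illustrative, not drawn from any specific project code.

```python
import numpy as np

# Toy Tensor-Product Representation (TPR) sketch (illustrative dimensions):
# a structure is encoded as a superposition of filler (x) role outer products.
rng = np.random.default_rng(0)
d_filler, d_role = 4, 3

# Two filler vectors (the "symbols") and two orthonormal role vectors
# (rows of an orthogonal matrix, so unbinding is exact).
fillers = rng.standard_normal((2, d_filler))
roles = np.linalg.qr(rng.standard_normal((d_role, d_role)))[0][:2]

# Bind each filler to its role via an outer product, then superpose.
T = sum(np.outer(f, r) for f, r in zip(fillers, roles))

# Unbind: with orthonormal roles, T @ role recovers the bound filler.
recovered = T @ roles[0]
print(np.allclose(recovered, fillers[0]))  # True
```

Because the roles are orthonormal, querying the tensor with a role vector retrieves exactly the filler bound to it; with merely random roles the retrieval is approximate, which is part of what learned relational encodings must cope with.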
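The core idea behind µP and µTransfer, mentioned in the last bullet, is to make initialization scales and per-layer learning rates explicit functions of network width, so that training dynamics stay stable as the model grows and hyperparameters tuned on a small model transfer to a large one. The sketch below is a simplified illustration of that kind of width-dependent scaling rule; the function name, the base width, and the specific multipliers are assumptions for illustration, not the API of the published `mup` library.

```python
# Minimal sketch of width-dependent scaling in the spirit of µP (illustrative,
# not the official mup library API): as width grows relative to a base width,
# initialization and learning-rate multipliers shrink so feature learning is
# preserved and hyperparameters can transfer across widths (µTransfer).
def mup_scales(width, base_width=64):
    m = width / base_width  # width multiplier relative to the tuned base model
    return {
        # hidden weights: standard 1/sqrt(fan_in) init scale
        "hidden_init_std": width ** -0.5,
        # hidden-layer learning rate scaled down by the width multiplier
        "hidden_lr_mult": 1.0 / m,
        # output (readout) layer: extra 1/m multiplier on its forward pass
        "output_mult": 1.0 / m,
    }

print(mup_scales(256))
# {'hidden_init_std': 0.0625, 'hidden_lr_mult': 0.25, 'output_mult': 0.25}
```

Under rules like these, a learning rate tuned at the base width remains near-optimal at larger widths, which is what makes zero-shot hyperparameter transfer practical.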