Keynote Speakers

Seamless Natural Communication between Humans and Machines

Abstract

Dialog systems such as Alexa and Siri are everywhere in our lives. They can complete tasks such as booking flights, making restaurant reservations, and training people for interviews. However, currently deployed dialog systems are rule-based and cannot generalize across domains, let alone perform flexible dialog context tracking.

We will first discuss how to design studies to collect realistic dialogs through a crowdsourcing platform. We will then introduce a dialog model that achieves good performance with limited data by leveraging multi-task learning and semantic scaffolds. We further improve the model's coherence by tracking both semantic actions and conversational strategies from the dialog history using finite-state transducers.
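To make the finite-state tracking idea concrete, below is a minimal, hypothetical sketch of a dialog-act tracker that constrains which system strategies remain coherent given the history. The state names, acts, and strategy labels are illustrative assumptions, not the speaker's actual model.

```python
# Illustrative sketch of finite-state tracking over dialog acts.
# All states, acts, and strategies below are hypothetical examples.

class DialogActFST:
    """Tracks a dialog state and the strategies coherent with the history."""

    def __init__(self):
        # transitions: (current_state, observed_user_act) -> next_state
        self.transitions = {
            ("start", "greet"): "greeted",
            ("greeted", "request_info"): "slot_filling",
            ("slot_filling", "provide_slot"): "slot_filling",
            ("slot_filling", "confirm"): "confirmed",
            ("confirmed", "thank"): "closing",
        }
        # system strategies that stay coherent in each state (hypothetical)
        self.allowed_strategies = {
            "start": {"greet"},
            "greeted": {"ask_goal", "chitchat"},
            "slot_filling": {"request_slot", "implicit_confirm"},
            "confirmed": {"execute_task", "offer_alternative"},
            "closing": {"goodbye"},
        }
        self.state = "start"

    def update(self, user_act: str) -> None:
        """Advance the tracker; stay in place if the act is unexpected."""
        self.state = self.transitions.get((self.state, user_act), self.state)

    def coherent_strategies(self) -> set:
        """Strategies the generator may pick without breaking coherence."""
        return self.allowed_strategies[self.state]


tracker = DialogActFST()
for act in ["greet", "request_info", "provide_slot", "confirm"]:
    tracker.update(act)
print(tracker.state)                  # confirmed
print(tracker.coherent_strategies())  # {'execute_task', 'offer_alternative'}
```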

Finally, we analyze some ethical concerns and human factors in dialog system deployment. All our work comes together to build seamless natural communication between humans and machines.


Short Bio

Zhou Yu is an Assistant Professor in the Computer Science Department at UC Davis and will join the Computer Science Department at Columbia University as an Assistant Professor in January 2021. She obtained her Ph.D. from Carnegie Mellon University in 2017. Zhou has built various dialog systems with real-world impact, such as a job interview training system, a depression screening system, and a second language learning system. Her research interests include dialog systems, language understanding and generation, vision and language, human-computer interaction, and social robots. Zhou received an ACL 2019 best paper nomination, was featured in Forbes 2018 30 under 30 in Science, and won the 2018 Amazon Alexa Prize.

FINDING NEMD

Abstract

The recent proliferation of conversational AI creatures is still superficially navigating shallow waters with regard to language understanding and generation. Accordingly, these new creatures are failing to dive properly into the deep oceans of human-like use of language and intelligence. FINDING NEMD (New Evaluation Metrics for Dialogue) is an epic journey across the seas of data and data-driven applications to tame these conversational AI creatures for the benefit of science and humankind.


Short Bio

Rafael is a Senior Research Scientist at Intapp Inc. His research focuses on applying NLP technologies to problems in the professional services industry. He is also an Adjunct Associate Professor at Nanyang Technological University (NTU) in Singapore, where he supervises student projects on question answering and conversational-agent applications. He has previous experience organizing workshops at ACL and other international conferences, including the workshop series on Named Entities (NEWS), Conversational Agents (WOCHAT) and Machine Translation (HyTra).

Better dialogue generation!

Abstract

Generative dialogue models currently suffer from a number of problems which standard maximum likelihood training does not address. They tend to produce generations that rely on too much copying, contain repetitions, overuse frequent words, and at a deeper level, contain logical flaws. We describe how all of these problems can be addressed by extending the recently introduced unlikelihood loss (Welleck et al., 2019) to these cases. We show that appropriate loss functions which regularize generated outputs to match human distributions are effective for the first three issues. For the last important general issue, we show applying unlikelihood to collected data of what a model should not do is effective for improving logical consistency, potentially paving the way to generative models with greater reasoning ability.
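As a pointer to the cited technique, here is a minimal sketch of the token-level unlikelihood term from Welleck et al. (2019), which pushes probability mass away from unwanted tokens (e.g., repetitions or over-frequent words). The function and tensor names are illustrative, and how the negative-candidate mask is built is application-specific.

```python
import torch
import torch.nn.functional as F

def unlikelihood_loss(logits, negative_candidates, eps=1e-6):
    """Token-level unlikelihood term (Welleck et al., 2019).

    logits: (batch, vocab) next-token scores from the model.
    negative_candidates: (batch, vocab) 0/1 mask of tokens to discourage,
        e.g. tokens already generated in the context (repetition) or
        over-frequent words; constructing this mask is application-specific.
    """
    probs = F.softmax(logits, dim=-1)
    # -log(1 - p(c)) for each negative candidate c, summed over candidates
    one_minus_p = (1.0 - probs).clamp(min=eps)
    ul = -(torch.log(one_minus_p) * negative_candidates).sum(dim=-1)
    return ul.mean()

# In training, this term is typically added to the usual maximum-likelihood loss:
# loss = mle_loss + alpha * unlikelihood_loss(logits, neg_mask)
```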

Short Bio

Jason Weston is a research scientist at Facebook, NY and a Visiting Research Professor at NYU. He earned his PhD in machine learning at Royal Holloway, University of London and at AT&T Research in Red Bank, NJ (advisors: Alex Gammerman, Volodya Vovk and Vladimir Vapnik) in 2000. Since then, he has worked at Biowulf Technologies, the Max Planck Institute for Biological Cybernetics, and Google Research. Jason has published over 100 papers, including best paper awards at ICML and ECML, and a Test of Time Award for his work "A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning", ICML 2008 (with Ronan Collobert). He was part of the YouTube team that won a National Academy of Television Arts & Sciences Emmy Award for Technology and Engineering for Personalized Recommendation Engines for Video Discovery. He was listed as the 16th most influential machine learning scholar by AMiner and one of the top 50 authors in Computer Science by Science magazine.