KEYNOTE SPEAKERS

For the upcoming TSD2021, the following outstanding keynote speakers, with recognized expertise in the conference's fields of interest, have agreed to give invited talks (listed in alphabetical order):

Lucie Flek
Kate Knill
Olga Vechtomova
Ivan Vulić


See the section below for details about the talks (topics, abstracts) and the speakers' biographies.

SPEAKER & TALK DETAILS

Lucie Flek

Associate Professor & Head of the Conversational AI and Social Analytics (CAISA) Lab – Department of Mathematics and Computer Science, Philipps-Universität Marburg, Germany

User-centric Natural Language Understanding


Abstract:  People express themselves in different ways – due to their individual characteristics, communication goals, cultural background, affinity to various sociodemographic groups, or just as a matter of personal style. Leveraging these differences can be beneficial for NLP applications. In this talk, I explore methods for interpreting the language together with its user-dependent aspects – personal history, beliefs, and social environment – and their effect on social NLP tasks. I further discuss implications of this research for conversational agents, and reflect on ethical risks of social prototyping.
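
(A purely illustrative aside, not taken from the talk: one common way to make an NLP classifier user-aware is to combine a text representation with a learned per-author embedding. The minimal PyTorch sketch below uses assumed names, sizes, and a simple bag-of-words encoder.)

import torch
import torch.nn as nn

class UserAwareClassifier(nn.Module):
    """Toy sketch: text features concatenated with a user embedding."""
    def __init__(self, vocab_size=10_000, num_users=1_000,
                 text_dim=128, user_dim=32, num_classes=2):
        super().__init__()
        self.word_emb = nn.EmbeddingBag(vocab_size, text_dim)  # bag-of-words text encoder
        self.user_emb = nn.Embedding(num_users, user_dim)       # one vector per author
        self.classifier = nn.Linear(text_dim + user_dim, num_classes)

    def forward(self, token_ids, offsets, user_ids):
        text_vec = self.word_emb(token_ids, offsets)   # (batch, text_dim)
        user_vec = self.user_emb(user_ids)              # (batch, user_dim)
        return self.classifier(torch.cat([text_vec, user_vec], dim=-1))

model = UserAwareClassifier()
tokens = torch.tensor([1, 5, 7, 2, 9])       # two documents packed together
offsets = torch.tensor([0, 3])               # document boundaries for EmbeddingBag
users = torch.tensor([42, 7])                # author id of each document
print(model(tokens, offsets, users).shape)   # torch.Size([2, 2])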

Biography:  Lucie Flek is an Associate Professor at the Philipps-Universität Marburg, leading the newly formed research group on Conversational AI and Social Analytics. Previously, she managed natural language understanding research programs for Amazon Alexa. In her academic work at TU Darmstadt, the Positive Psychology Center at the University of Pennsylvania, and University College London, she focused on psychological and social applications of insights from stylistic variation. She has served as an Area Chair for Computational Social Sciences at multiple ACL* conferences, and is a co-organizer of the Stylistic Variation workshop and of Widening NLP. Before turning to natural language processing, Lucie contributed to particle physics research in the area of axion searches.

Lucie's interests lie in machine learning applications in Natural Language Processing (NLP), with core expertise in user modeling and stylistic variation. She has been investigating how individuals and sociodemographic groups differ in their language use, and how this variation can in turn be used in machine learning tasks to predict in-group behavior of interest. This has also led her to a broader interest in the biases the NLP field is subject to: stereotype exaggeration, ethical issues, the performance of machine learning models on underrepresented groups, and, subsequently, domain adaptation of machine learning models.

Her Ph.D. thesis focused on lexical semantics – examining to what extent word ambiguity and context play a role in document classification tasks. When is the available context sufficient for the task at hand, following the distributional hypothesis, and when do explicit word sense disambiguation, concept graphs, or semantic ontologies become beneficial? Do these findings still hold with the rise of deep learning architectures? Does explicitly supplied lexical-semantic information still improve classification in scenarios with limited training data?

She has continued to pursue the limited-training-data paradigm in industry, leading projects on multilingual and multitask learning, as well as various other bootstrapping efforts for settings with limited in-domain labeled data. Lucie is a big fan of cross-disciplinary collaborations, publishing together with educational researchers, psychologists, sociologists, physicists, and visual analysts, among others.
Kate Knill

Principal Research Associate – Machine Intelligence Laboratory, Information Engineering Division (F), Dept. of Engineering, University of Cambridge, United Kingdom

Use of Deep Learning in Free Speaking Non-native English Assessment


Abstract:  More than 1.5 billion people worldwide use English as an additional language, which has created a large demand for teaching and assessment. To help meet this need, automatic assessment systems can provide support for, and an alternative to, human examiners. The ability to provide remote assessment has become even more important with the COVID-19 pandemic: learners and teachers can benefit from online systems, available 24/7, that monitor progress whenever and wherever the learners like. Free speaking tests, where open responses are given to prompted questions, allow learners to demonstrate their proficiency at speaking English. This presents a number of challenges, as the spontaneous speech is not known in advance. An auto-marker must be able to assess this free speech accurately, independent of the speaker's first language (L1) and of the audio recording quality, which can vary considerably. This talk will look at how deep learning can be applied to free speaking non-native English assessment.
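
(A purely illustrative aside, not the ALTA auto-marker: a minimal deep-learning grader can pool frame-level features over a spoken response and regress a single proficiency score. All names, feature dimensions, and the architecture below are assumptions.)

import torch
import torch.nn as nn

class SimpleGrader(nn.Module):
    """Toy sketch: encode a response, average over time, predict one score."""
    def __init__(self, feat_dim: int = 80, hidden: int = 128):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)    # single proficiency score

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, frames, feat_dim), e.g. filterbanks or ASR encoder outputs
        encoded, _ = self.encoder(features)
        pooled = encoded.mean(dim=1)              # utterance-level average over time
        return self.score(pooled).squeeze(-1)     # (batch,) predicted grades

grader = SimpleGrader()
batch = torch.randn(4, 300, 80)                   # four responses, 300 frames each
print(grader(batch).shape)                        # torch.Size([4])

In a real system the input features would come from an ASR front-end and the score would be calibrated against human examiner grades; this sketch only shows the overall shape of such a model.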

Biography:  Kate Knill is a Principal Research Associate at Cambridge University, primarily working on spoken language teaching and assessment within the Automatic Language Teaching and Assessment (ALTA) Institute. Previously she worked on the rapid development of speech systems for new languages on the IARPA-funded BABEL project.

She received a first-class B.Eng. (Joint Hons) degree in Electrical Engineering and Mathematics from the University of Nottingham in 1990 and a Ph.D. degree from Imperial College, University of London, U.K., in 1994. Her doctoral thesis was in the area of Adaptive Digital Signal Processing, supervised by Prof. Anthony Constantinides. Kate was sponsored on both degrees and a pre-university year by Marconi Underwater Systems Ltd.

She has worked in speech technology since 1993. From 1993 to 1996, she was a Research Associate in the Speech, Vision and Robotics Group in the Engineering Department at Cambridge University, working on audio document retrieval, supervised by Prof. Steve Young and funded by HP Labs, Bristol. She joined the Speech R&D team of Nuance Communications in 1997. As Languages Manager (2000 – 2002), she led a cross-site team that developed over 20 languages for speech recognition and speaker verification. In 2002, she established a new Speech Technology Group at Toshiba Research Europe, Cambridge Research Laboratory (CRL), Cambridge, U.K. As Assistant Managing Director and Speech Technology Group Leader, she was responsible for interactive technology, in particular core speech recognition and synthesis R&D and the development of European and North American speech products. The Cambridge team led the creation of new speech recognition and speech synthesis engines for Toshiba, for which Kate served as project lead across sites in the UK, Japan and China.

Kate is a member of the IEEE, IET and ISCA. She is a member of the ISCA Board (2013 – 2021) and is currently Secretary of ISCA. She is a member of the IEEE James L. Flanagan Speech and Audio Processing Award committee, and was a member of the IEEE Speech and Language Technical Committee from 2009 to 2012. She was an Area Chair for Interspeech 2009, 2012, 2014, 2015 and 2017 and Publications Co-Chair for SLT 2012, and is an Associate Editor of the EURASIP Journal on Audio, Speech, and Music Processing. Kate chaired the technical sub-committee of the Digital TV Group (DTG) Usability Text-to-Speech Synthesis sub-group.
Olga Vechtomova

Associate Professor – Natural Language Processing Laboratory, Department of Management Sciences, Faculty of Engineering, School of Computer Science, University of Waterloo, Canada

Poetics of AI: Models for Stylized Text Generation


Abstract:  Style is a characteristic of text used by authors to communicate content effectively to their readers, as well as to evoke specific emotional responses. Generative models of text should therefore not only be able to produce texts with meaningful and coherent content, but also texts that exhibit specific stylistic characteristics. With the advances in neural text generative models in the past few years, there has been increased research interest in stylized text generation and text style transfer. In this talk I will first discuss what style is and whether it is possible to view it separately from content. I will then give an overview of state-of-the-art approaches to stylized text generation and their applications, such as the generation of poetry, of formal vs. informal texts, and of texts that conform to a certain genre. I will also discuss the existing challenges in evaluating the outputs of stylized text generative models, especially such difficult-to-measure characteristics as creativity and originality.
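
(A purely illustrative aside, not taken from the talk: one simple recipe for stylized generation is to condition a decoder on a learned style embedding at every step. The PyTorch sketch below uses assumed vocabulary sizes, dimensions, and style labels.)

import torch
import torch.nn as nn

class StyleConditionedDecoder(nn.Module):
    """Toy sketch: a recurrent decoder conditioned on a style embedding."""
    def __init__(self, vocab_size=5_000, num_styles=3, emb_dim=128, style_dim=16, hidden=256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, emb_dim)
        self.style_emb = nn.Embedding(num_styles, style_dim)   # e.g. formal / informal / poetic
        self.rnn = nn.GRU(emb_dim + style_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, token_ids, style_ids):
        # token_ids: (batch, seq), style_ids: (batch,)
        tok = self.tok_emb(token_ids)
        sty = self.style_emb(style_ids).unsqueeze(1).expand(-1, token_ids.size(1), -1)
        hidden, _ = self.rnn(torch.cat([tok, sty], dim=-1))    # style injected at every step
        return self.out(hidden)                                 # next-token logits per position

decoder = StyleConditionedDecoder()
logits = decoder(torch.randint(0, 5_000, (2, 12)), torch.tensor([0, 2]))
print(logits.shape)                                              # torch.Size([2, 12, 5000])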

Biography:  Olga Vechtomova leads the Natural Language Processing Lab at the University of Waterloo, with a research focus on neural text generative models. The lab's current research projects include dialogue models, text style transfer, stylized text generation, multimodal learning, and creative and artistic applications of text generative models. She received her undergraduate degree in Linguistics from Volgograd State University, Russia, and her Master's in Information Systems and PhD in Information Science, both from City University London, UK.

Her lab's research contributions in recent years include a variational attention model that enables the generation of diverse and contextually relevant responses in a dialogue system (COLING'18), a stochastic Wasserstein autoencoder for text generation (NAACL'19), a neural text style transfer model (ACL'19), a syntax-semantics disentanglement model for text generation (ACL'19), work on text simplification (ACL'20) and text summarization (ACL'20), lyrics generation conditioned on music audio (NLP4MusA'20), and adversarial learning on the latent space for dialogue generation (COLING'20). In 2020, she co-presented a tutorial on stylized text generation at ACL.

Olga's research has been supported by a number of industry and government grants, including an Amazon Research Award and funding from the Natural Sciences and Engineering Research Council of Canada. She has over 50 publications in NLP and Information Retrieval conferences and journals. In 2019, she and her colleagues received the ACM SIGIR Test of Time Award.
Ivan Vulić

Senior Research Associate – Language Technology Lab (LTL), Department of Theoretical and Applied Linguistics (DTAL), Faculty of English, University of Cambridge, United Kingdom

Cross-Lingual Knowledge Transfer and Adaptation in Low-Data Regimes: Achievements, Trends, and Challenges


Abstract:  A key challenge in cross-lingual NLP is developing general language-independent architectures that will be equally applicable to any language. However, this ambition is hindered by the large variation in 1) the structural and semantic properties of the world's languages, as well as 2) the scarcity of raw and task data for many languages, tasks, and domains. As a consequence, existing language technology is still largely limited to a handful of resource-rich languages. In this talk, we introduce and discuss a range of recent techniques and breakthroughs that aim to deal with such large cross-language variation and low-data regimes efficiently. We cover a range of cutting-edge approaches, including adapter-based models for cross-lingual transfer, contextual parameter generation and hypernetworks, learning in few-shot and zero-shot scenarios, and typologically driven learning and source selection. Finally, the talk demonstrates that low-resource languages, despite the very positive research trends and results of recent years, still lag behind major languages, and outlines several key challenges for future research in this area.
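
(A purely illustrative aside, not taken from the talk: adapter-based cross-lingual transfer typically inserts small bottleneck modules into a frozen pretrained encoder and trains only those modules for a new language or task. The PyTorch sketch below shows such a bottleneck adapter; the hidden and bottleneck sizes are assumptions.)

import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Toy sketch: a bottleneck adapter inserted into a frozen transformer layer."""
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)   # project down to the bottleneck
        self.up = nn.Linear(bottleneck, hidden_size)     # project back up
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection: the frozen backbone output passes through unchanged,
        # and only the small adapter weights are trained for the new language or task.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

adapter = Adapter()
x = torch.randn(2, 10, 768)       # (batch, sequence, hidden) from a frozen encoder layer
print(adapter(x).shape)           # torch.Size([2, 10, 768])

Because only the adapter parameters are updated, a new language or task adds a very small number of trainable weights on top of the shared multilingual backbone.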

Biography:  Ivan Vulić is a Senior Research Associate in the Language Technology Lab, University of Cambridge, and a Senior Scientist at PolyAI. He holds a PhD in Computer Science from KU Leuven, awarded summa cum laude. His core expertise is in representation learning, cross-lingual learning, human language understanding, distributional, lexical, multi-modal, and knowledge-enhanced semantics in monolingual and multilingual contexts, transfer learning for enabling cross-lingual NLP applications such as conversational AI in low-resource languages, and machine learning for (cross-lingual) NLP. He has published more than 100 papers at top-tier NLP and IR conferences and journals. He co-lectured a tutorial on word vector space specialization at EACL 2017, ESSLLI 2018, and EMNLP 2019, and tutorials on cross-lingual representation learning and cross-lingual NLP at EMNLP 2017 and ACL 2019. He also co-lectured tutorials on conversational AI at NAACL 2018 and EMNLP 2019. He co-authored a book on cross-lingual word representations, published by Morgan & Claypool in June 2019. He serves as an area chair and regularly reviews for all major NLP and machine learning conferences and journals. Ivan has given invited talks in academia and industry, including at Apple Inc., the University of Cambridge, UCL, the University of Copenhagen, Paris-Saclay, Bar-Ilan University, Technion IIT, the University of Helsinki, UPenn, KU Leuven, the University of Stuttgart, TU Darmstadt, the London REWORK summit, the University of Edinburgh, and others. He has co-organised a number of NLP workshops, served as the publication chair for ACL 2019, and currently serves as the tutorial chair for EMNLP 2021 and the program chair for *SEM 2021.