For the upcoming TSD2023, the following outstanding keynote speakers, with recognized expertise in the conference's fields of interest, have agreed to give keynote talks (in alphabetical order):

More to come! The programme committee, together with the organizing crew, is negotiating with three more prospective keynote speakers for TSD2023. As soon as an agreement is reached, they will be introduced here...

See the next section below for details about the talks (topics, abstracts).


Philippe Blache

Director of Research at the Laboratoire Parole et Langage (LPL), Institute of Language, Communication and the Brain – CNRS & Aix-Marseille University, France

Predictive Coding, Good-enough Processing and Constructions: A Neuro-cognitive Model for Dialogue

Abstract:  Language understanding is a complex task that integrates different sources of information, from sounds and gestures to context. Despite this complexity, the process is extremely fast and robust, and is performed in real time during conversations. Many studies have shown that this robustness and efficiency rest on several mechanisms: the ability to predict, the possibility of directly accessing entire pieces of meaning, and the possibility of performing "good-enough" processing, sufficient to access the meaning. These mechanisms, substituting for the classical incremental and compositional architecture, facilitate access to meaning. However, existing models do not explain precisely when these facilitation mechanisms are triggered, or whether they inhibit the standard ones or, on the contrary, work in parallel with them.

In this presentation I propose a new model that integrates both facilitation and standard mechanisms by revisiting the different stages of processing: segmentation of the input, access to the corresponding meaning in long-term memory, and integration into the interpretation under construction. This architecture rests on several features: a unique representation of linguistic objects (independently of their granularity), control of memory access (in particular through search-space reduction), and multiple-level prediction. This neuro-cognitive model provides a new framework explaining how deep and shallow mechanisms of language processing can coexist. It is also a good candidate for explaining various mismatch effects observed at the brain level.

Biography:  Philippe Blache is a Director of Research at the CNRS. His work focuses on language processing from formal, computational and neuro-linguistic perspectives. He is specifically interested in how humans process language in a natural context, typically a conversation. One part of his work consists in developing tools and methods for describing data collected in ecological contexts, taking into account the multimodal aspects of communication. The second and most important part lies in theoretical aspects, aiming at elaborating a general cognitive model of language processing. More recently, Philippe has been addressing these questions by studying the neural basis of natural interaction, drawing on large and original multimodal datasets that make it possible to explore the brain signal in conversations.

Philippe has been director of two CNRS research units (2LC, LPL) and one Excellence Laboratory (Brain and Language Research Institute), and until recently was director of the Institute of Language, Communication and the Brain (ILCB). The ILCB is a major achievement, gathering 10 research units in linguistics (LPL), psychology (LPC, ISM, PRISM), neurosciences (INT, INS, LNC), mathematics (I2M) and computer science (LIS, LIA). It also federates six experimental platforms in brain imaging, virtual reality and primatology.

More information at https://www.lpl-aix.fr/contact/philippe-blache/.
Ivan Habernal

Head of the Trustworthy Human Language Technologies (TrustHLT) Group – Department of Computer Science, Technische Universität Darmstadt, Germany

Towards Privacy-Preserving Natural Language Processing

Abstract:  What does it mean for natural language processing (NLP) systems to protect privacy, and why should we even care? In this talk, we will explore privacy challenges and concerns in NLP and present possible solutions to address them. We will cover anonymization as well as formal techniques based on differential privacy both in training NLP models and in publishing data. Furthermore, we will also touch on legal and ethical implications when implementing privacy-preserving solutions in NLP.

Biography:  Ivan Habernal leads the independent research group "Trustworthy Human Language Technologies" at the Department of Computer Science, Technische Universität Darmstadt, Germany. In winter term 2022/23 he also holds an interim professorship in Computational Linguistics at Ludwig Maximilian University of Munich. His current research areas include privacy-preserving NLP, legal argument mining, and explainable and trustworthy models. His research track covers argument mining and computational argumentation, crowdsourcing, and serious games, among others.

More information at https://www.trusthlt.org.