Romain Laroche

I am a senior researcher working on dialogue systems and reinforcement learning.

Short biography

Born in 1979, I graduated from Ecole Polytechnique in 2001 (X98) and from Telecom Paristech in 2003. I then joined Orange in the Agent and Natural Dialogue team, to work on the Artimis dialogue technology, built on belief-desire-intention (BDI) logic and the theory of interaction. While scientifically stimulating, this approach showed its limits: it was difficult for anyone but a BDI-logic expert to design an application. From 2007 onwards, a more classical automaton-based dialogue model was adopted and developed. At the same time, I started a PhD at Université Pierre & Marie Curie (LIP6), obtaining the doctoral degree in 2010. During my PhD, I specialised in reinforcement learning applied to dialogue systems of all kinds (interactive voice response, personal assistants, text-based conversational agents, ...) and in all domains (customer relationship management, appointment scheduling, restaurant search, smart home, multimedia management, ...). I am now the scientific expert on dialogue systems at Orange Labs. I have authored or co-authored more than 30 international peer-reviewed publications.

Topics of interest

Dialogue Systems, Spoken Dialogue Systems, Text-based Dialogue Systems, Multimodal Dialogue Systems, Interactive Voice Response, Personal Assistant, Voice-controlled Smart Home.

Reinforcement Learning, Inverse Reinforcement Learning, Transfer Learning, Active Learning, Multi-Armed Bandits, Algorithm Selection, Meta-Learning, Machine Learning, Markov Decision Processes, Partially-Observable Markov Decision Processes.

Dempster-Shafer Theory, Theory of Hints, Bayesian Networks, Markov Chains, Fuzzy Logic, Possibility Theory.

Belief-Desire-Intention Logic, First-Order Logic, Argumentation Theory.

Publications

Laroche R., Bretier P., and Bouchon-Meunier B. (2008) Uncertainty Management in Dialogue Systems. IPMU.

Laroche R., Putois G., Bretier P., and Bouchon-Meunier B. (2009) Hybridisation of expertise and reinforcement learning in dialogue systems. Interspeech.

Laroche R., Putois G., and Bretier P. (2010) Optimising a handcrafted dialogue system design. Interspeech.

Laroche R., Bretier P., and Putois G. (2010) Enhanced monitoring tools and online dialogue optimisation merged into a new spoken dialogue system design experience. Interspeech.

Laroche R. (2010) Raisonnement sur les incertitudes et apprentissage pour les systèmes de dialogue conventionnels (Reasoning about Uncertainty and Learning for Conventional Dialogue Systems). PhD thesis.

Putois G., Bretier P., and Laroche R. (2010) Online Learning for Spoken Dialogue Systems: The Story of a Commercial Deployment Success. SIGDIAL.

El Asri L., Laroche R., and Pietquin O. (2012) Reward Function Learning for Dialogue Management. STAIRS.

El Asri L., and Laroche R. (2013) Will my Spoken Dialogue System be a Slow Learner? SIGDIAL.

El Asri L., Laroche R., and Pietquin O. (2013) Reward Shaping for Statistical Optimisation of Dialogue Management. SLSP.

Laroche R., Dziekan J., Roussarie L., and Baczyk P. (2013) Cooking Coach Spoken/Multimodal Dialogue Systems. CwC (IJCAI workshop).

Bouneffouf D., Laroche R., Urvoy T., Feraud R., and Allesiardo R. (2014) Contextual Bandit for Active Learning: Active Thompson Sampling. ICONIP.

Ekeinhor-Komi T., Falih H., Chardenon C., Laroche R., and Lefevre F. (2014) Un assistant vocal personnalisable (A Customisable Voice Assistant). TALN (demonstration).

El Asri L., Laroche R., and Pietquin O. (2014) Régression Ordinale pour la Prédiction de la Qualité d'Interaction (Ordinal Regression for Interaction Quality Prediction). JEP.

El Asri L., Lemonnier R., Laroche R., Pietquin O., and Khouzaimi H. (2014) NASTIA: Negotiating Appointment Setting Interface. LREC.

El Asri L., Laroche R., and Pietquin O. (2014) DINASTI: Dialogues with a Negotiating Appointment Setting Interface. LREC.

El Asri L., Khouzaimi H., Laroche R., and Pietquin O. (2014) Ordinal regression for interaction quality prediction. IEEE ICASSP.

El Asri L., Laroche R., and Pietquin O. (2014) Task Completion Transfer Learning for Reward Inference. MLIS (AAAI workshop).

Khouzaimi H., Laroche R., and Lefèvre F. (2014) DictaNum : système de dialogue incrémental pour la dictée de numéros (DictaNum: an Incremental Dialogue System for Number Dictation). TALN (demonstration).

Khouzaimi H., Laroche R., and Lefèvre F. (2014) Vers une approche simplifiée pour introduire le caractère incrémental dans les systèmes de dialogue (Towards a Simplified Approach for Introducing Incrementality into Dialogue Systems). TALN.

Khouzaimi H., Laroche R., and Lefèvre F. (2014) An easy method to make dialogue systems incremental. SIGDIAL.

Laroche R. (2014) CFAsT: Content-Finder AssistanT. TALN (demonstration).

Barlier M., Perolat J., Laroche R., and Pietquin O. (2015) Human-Machine Dialogue as a Stochastic Game. SIGDIAL.

Khouzaimi H., Laroche R., and Lefèvre F. (2015) Dialogue Efficiency Evaluation of Turn-Taking Phenomena in a Multi-Layer Incremental Simulated Environment. HCI.

Khouzaimi H., Laroche R., and Lefèvre F. (2015) Optimising Turn-Taking Strategies With Reinforcement Learning. SIGDIAL.

Khouzaimi H., Laroche R., and Lefèvre F. (2015) Turn-taking phenomena in incremental dialogue systems. EMNLP.

Laroche R. (2015) Content Finder Assistant. ICIN.

El Asri L., Laroche R., and Pietquin O. (2016) Compact and interpretable dialogue state representation with Genetic Sparse Distributed Memory. IWSDS.

Khouzaimi H., Laroche R., and Lefèvre F. (2016) Incremental human-machine dialogue simulation. IWSDS.

Laroche R., and Genevay A. (2016) The dialogue negotiation game. IWSDS.

Genevay A., and Laroche R. (2016) Transfer learning for user adaptation in spoken dialogue systems. AAMAS.

El Asri L., Piot B., Geist M., Laroche R., and Pietquin O. (2016) Score-based Inverse Reinforcement Learning. AAMAS.

Khouzaimi H., Laroche R., and Lefèvre F. (2016) Reinforcement Learning applied to Incremental Spoken Dialogue Systems. IJCAI (to appear).

Barlier M., Laroche R., and Lefèvre F. (2016) A Stochastic Model for Computer-Aided Human-Human Dialogue. Interspeech (to appear).