Ruo Yu (David) Tao

Computer Science Ph.D. Student @ Brown
Email: ruoyutao "at" brown "dot" edu
Resume: here

I'm a Ph.D. student researching reinforcement learning in the Intelligent Robot Lab at Brown University, advised by George Konidaris. My current research focuses on agent-state construction and representation learning for reinforcement learning in partially observable environments. I also think eligibility traces are super neat.

Previously, I was a Computing Science M.Sc. (Thesis) student at the University of Alberta, co-advised by Adam White and Marlos C. Machado in the RLAI lab. Further back, I completed my B.Sc. in Computer Science at McGill University, graduating with First Class Honours and on the Dean's Honour List. I have also been a research intern at Mila and Microsoft Research.

If you're interested in collaborating, feel free to send me an email.

Publications

Auxiliary Inputs for Agent-State Construction

Ruo Yu Tao, Adam White, Marlos C. Machado

Transactions on Machine Learning Research (TMLR), 2023

Measuring and Mitigating Interference in Reinforcement Learning

Vincent Liu, Han Wang, Ruo Yu Tao, Khurram Javed, Adam White, Martha White

Conference on Lifelong Learning Agents (CoLLAs), 2023

Novelty Search in Representational Space for Sample Efficient Exploration

Ruo Yu Tao, Vincent François-Lavet, Joelle Pineau

Oral presentation, Neural Information Processing Systems (NeurIPS), 2020

Towards Solving Text-Based Games by Producing Adaptive Action Spaces

Ruo Yu Tao, Marc-Alexandre Côté, Xingdi Yuan, Layla El Asri

Oral presentation, WordPlay Workshop at Neural Information Processing Systems (NeurIPS), 2018

TextWorld: A Learning Environment for Text-Based Games

Marc-Alexandre Côté, Ákos Kádár, Xingdi Yuan, Ben Kybartas, Tavian Barnes, Emery Fine, James Moore, Ruo Yu Tao, Matthew Hausknecht, Layla El Asri, Mahmoud Adada, Wendy Tay, Adam Trischler

Computer Games Workshop at the International Joint Conference on Artificial Intelligence (IJCAI), 2018

Education

Brown University

Ph.D. Student, 2022 - Present

Advisor: George Konidaris.

University of Alberta

M.Sc. (Thesis), 2020 - 2022

Advisors: Adam White and Marlos C. Machado.

McGill University

Hons. B.Sc., 2016 - 2020

Undergraduate research advisor: Joelle Pineau.
First Class Honours, Dean's Honour List.

Experience

National University of Singapore (NUS)

During the summer of 2020, I worked on a research project on neural network-based memory for RL and SLAM, advised by Lee Wee Sun.

Mila (Quebec AI Institute)

At Mila, I worked on exploration in a model-based reinforcement learning setting during the final year of my undergraduate degree at McGill. The project was mentored by Vincent François-Lavet and advised by Joelle Pineau, and our paper on this work was accepted as an oral presentation at NeurIPS 2020.

Microsoft Research Montreal

During my time at Microsoft Research, I worked on problems at the intersection of reinforcement learning and natural language processing. Advised by Marc-Alexandre Côté, I contributed to projects including:

  • Adaptive text action spaces: see the above paper.

  • TextWorld: a reinforcement learning framework for agents to learn text-based games.

Articles

  • After my tumultuous time applying for and deciding on a PhD program, I wrote a short guide on how to successfully apply for a CS PhD.

  • I wrote a short piece on the dangers of reshaping in tensor manipulation libraries (PyTorch in this case).
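A minimal sketch of the kind of pitfall that piece covers (hypothetical shapes; PyTorch assumed): reshape only re-chunks the flat row-major buffer, so using it to swap axes yields the shape you wanted with the data silently scrambled, whereas permute actually moves the axes.

    import torch

    # A batch of 2 sequences, each of length 3, with 4 features.
    x = torch.arange(24).reshape(2, 3, 4)  # (batch, time, features)

    # Goal: (time, batch, features).
    # reshape gives the right shape but interleaves elements from
    # different sequences, since it just re-chunks the flat buffer.
    wrong = x.reshape(3, 2, 4)

    # permute actually swaps the axes.
    right = x.permute(1, 0, 2)

    print(torch.equal(wrong, right))  # False: same shape, different data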

Miscellaneous things about me

  • I have a cooking blog you should check out!

  • In a past life (and, technically, still today), I was a lieutenant in the Singapore Armed Forces and a Combat Engineer by vocation.

  • I spent my formative years in Beijing, and still call it home.

  • I love to read.

  • I love climbing, especially outdoor sport climbing.

Contact

Github
Google Scholar
LinkedIn
Come find me on my Twitter
Need a book recommendation? Check out my Goodreads