I am the Technical Strategist for Responsible AI in the CTO office at Bloomberg, where I develop and implement the company's vision and framework for Responsible AI. My goal is to shape policies and practices that enable the development of trustworthy, transparent, and reliable AI solutions. Previously, as Head of NLP, I directed the development and adoption of language technology. Before joining Bloomberg, I was a researcher at Google working on the evaluation of large language models. I hold a Ph.D. from Harvard University.
If you are interested in positions at Bloomberg, please check our AI careers page.
Research
My research interests include natural language generation, model evaluation, and interpretability. I particularly enjoy working on large multi-disciplinary collaborations, for example the GEM benchmark. I have contributed to multiple LLM projects, including PaLM, PaLM 2, BLOOM, and BloombergGPT. My interactive visualization tools, such as GLTR, LSTMVis, and exBERT, have had over a million users and are widely used to teach neural networks for NLP.
You can find a full list of papers on my Google Scholar profile.
Selected Talks and Presentations
- BloombergGPT: A Large Language Model for Finance (YouTube), Synthetic Intelligence Forum, 2023
- Do we know what we don’t know? The state of evaluation in NLP (Slides), Stanford NLP Seminar, 2022
- Measuring the Quality of Language Generation Systems (Slides), SIGGEN/SICSA Seminar, 2022
- It’s time to fix evaluation of generated text (Slides), Keynote at the Summarization Workshop, EMNLP 2021
- Tutorial on Interpretability (Slides), ACL 2020, with Yonatan Belinkov and Ellie Pavlick
Selected Press and Media
- NLPs and the Hurdles Halting Their True Potential, AI Magazine, 2024
- There’s A New AI Tool That Can Spot Text Written By AI, Even When Humans Are Fooled, HuffPost, 2019
- AI now can spot fake news generated by AI, CNET, 2019
- IBM, Harvard develop tool to tackle black box problem in AI translation, VentureBeat, 2018