I am a PhD candidate at the Language Technologies Institute at Carnegie Mellon University, advised by Prof. Yulia Tsvetkov. I was also fortunate to be advised by the late Prof. Jaime Carbonell during the initial years of my PhD. I completed my master's at LTI in 2019, advised by Prof. William Cohen.
I’ve spent a few amazing summers as a Research Intern — in 2021 at AI2 with Dr. Matthew Peters and Dr. Pradeep Dasigi, in 2020 at Google Brain hosted by Niki Parmar and Dr. Ashish Vaswani, and in 2019 with the Google Language team hosted by Dr. William Cohen and Dr. Michael Collins.
I work on building adaptable, trustworthy, and reliable NLP systems through interpretable and factual model designs. I have worked on incorporating factuality and interpretability into a range of downstream NLP tasks, including language generation, classification, dialog systems, and question answering.
Previously, I completed my bachelor's at PES Institute of Technology (PESIT), Bangalore, and worked at Flipkart, India, for two years.
PhD in Language Technologies, 2024
Carnegie Mellon University
MS in Language Technologies, 2019
Carnegie Mellon University
BE in Computer Science and Engineering, 2015
PES Institute of Technology
[21/01/24] Check out 2 new preprints on Fine-Grained Hallucination in LLMs and Preserving Perspectives in News Summarization.
[18/01/24] Our paper Knowledge Card: Filling LLMs’ Knowledge Gaps with Plug-in Specialized Language Models was accepted to ICLR 2024 as an Oral! See you in Vienna!
[12/01/23] Gave a talk on Understanding and Mitigating Factual Inconsistencies in Language Generation at the ML Collective - Deep Learning: Classics and Trends Reading Group! Thank you for having me!
[07/10/23] Our paper FactKB: Generalizable Factuality Evaluation using Language Models Enhanced with Factual Knowledge was accepted to EMNLP (Main Conference) 2023.
[28/08/23] Excited to be selected as an EECS Rising Star 2023!
[09/07/23] Attending ACL 2023 to present LEXplain at *SEM 2023.
[20/06/23] Gave a talk on Reporting and Mitigating Language Model Harms at the Center for Security and Emerging Technology at Georgetown University!
[16/06/23] Our paper ‘LEXplain: Improving Model Explanations via Lexicon Supervision’ was accepted to *SEM.
[18/05/23] Gave a talk on Generalizable Factual Error Correction at the DARPA SemaFor group!
[05/05/23] Presented our survey paper on Mitigating LM Risks at EACL 2023.
[17/04/23] Proposed my thesis on ‘Designing Transparent and Factual Text Generation Systems Grounded in Linguistic Structures’.
[21/01/23] 2 papers accepted to EACL 2023!