Graduate student at The George Washington University working on reasoning-centric AI systems, with a focus on large language models (LLMs), trustworthy and explainable AI, and AI for code intelligence and security.
Structured attribution framework for training small LLMs to generate step-by-step reasoning with explicit citations, improving the faithfulness and interpretability of their outputs.
Reinforcement-learning-based retrieval framework that learns when, and how much, to retrieve, improving both efficiency and reasoning quality in LLM systems.
Graph + LLM framework combining structural and semantic reasoning for explainable vulnerability detection in source code.