Authors: Jugal Gajjar, Kamalasankari Subramaniakuppusamy
RSAT trains small language models to produce step-by-step table reasoning with cell-level citations. Using a two-phase SFT+GRPO pipeline, RSAT improves attribution faithfulness by 3.7× while maintaining structured, verifiable outputs. Post-hoc attribution fails (~13% success), highlighting the need for reasoning with integrated attribution.
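A minimal sketch of what checking cell-level citations could look like; the function and data-structure names here are illustrative, not RSAT's actual output format:

```python
# Hypothetical checker: verify that every cited (row, col) cell in a
# reasoning trace actually exists in the source table.

def verify_citations(table, steps):
    """Return the fraction of cited cells that resolve to a real cell.

    table: list of rows, each a list of cell values.
    steps: reasoning steps, each a dict with 'text' and 'cites' entries.
    """
    total = valid = 0
    for step in steps:
        for r, c in step["cites"]:
            total += 1
            if 0 <= r < len(table) and 0 <= c < len(table[r]):
                valid += 1
    return valid / total if total else 1.0

table = [["City", "Population"], ["Oslo", "709k"], ["Bergen", "286k"]]
steps = [
    {"text": "Oslo has 709k residents.", "cites": [(1, 0), (1, 1)]},
    {"text": "Bergen has fewer, at 286k.", "cites": [(2, 1)]},
]
```

A real verifier would also check that each cited cell supports the step's claim; existence checking is only the first, mechanical layer.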
Authors: Kamalasankari Subramaniakuppusamy, Jugal Gajjar
Introduces the Feature Attribution Stability Suite (FASS), a benchmark measuring post-hoc attribution stability under geometric, photometric, and compression perturbations. FASS enforces prediction-invariance filtering and evaluates Grad-CAM, Integrated Gradients, GradientSHAP, and LIME, revealing method-level stability differences, with Grad-CAM consistently the most stable.
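An illustrative sketch (not the FASS code) of the core protocol: measure attribution similarity across a perturbation, but only on samples whose prediction the perturbation does not flip:

```python
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def stability(samples, predict, attribute, perturb):
    """Mean cosine similarity between attribution vectors for x and
    perturb(x), skipping samples whose prediction changes (the
    prediction-invariance filter)."""
    scores = []
    for x in samples:
        xp = perturb(x)
        if predict(x) != predict(xp):   # filter out prediction flips
            continue
        scores.append(cosine(attribute(x), attribute(xp)))
    return sum(scores) / len(scores) if scores else float("nan")

# Toy setup: attribution is the input itself; the perturbation is a 10%
# rescale, which changes neither the sign-based prediction nor the
# attribution direction, so stability should come out at 1.0.
samples = [[1.0, 2.0], [-1.0, 0.5]]
score = stability(
    samples,
    predict=lambda x: x[0] > 0,
    attribute=lambda x: x,
    perturb=lambda x: [v * 1.1 for v in x],
)
```

The filter matters: without it, attribution change on a flipped prediction is expected behavior, not instability, and would contaminate the metric.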
Author: Jugal Gajjar
Introduces Adaptive RAG, a reinforcement learning framework that dynamically selects retrieval counts based on query difficulty and model confidence. Achieves 3.2–6.5% higher accuracy while reducing retrievals by 14–37% across 7 models (3.8B–120B parameters) on 9 QA datasets, demonstrating efficient, adaptive behavior.
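A rule-based stand-in for the idea behind the learned policy (the real framework learns this with RL; the thresholds below are invented for illustration): map query confidence to a retrieval count.

```python
def choose_k(confidence, k_min=0, k_mid=2, k_max=8):
    """Map model confidence in [0, 1] to a retrieval count: confident
    queries retrieve few (or no) passages, uncertain ones retrieve more."""
    if confidence >= 0.9:
        return k_min     # parametric knowledge suffices, skip retrieval
    if confidence >= 0.6:
        return k_mid
    return k_max

ks = [choose_k(c) for c in (0.95, 0.7, 0.2)]
```

The efficiency gain comes from the easy-query branch: every query a fixed-k RAG system would have retrieved for, but the model could answer alone, is saved retrieval cost.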
Authors: Jugal Gajjar, Kaustik Ranaware, Kamalasankari Subramaniakuppusamy, Vaibhav Gandhi
Introduces HyperComplEx, a hybrid multi-space embedding framework combining hyperbolic, complex, and Euclidean geometries with adaptive relation-specific attention. Achieves 0.612 MRR on a 10M-node CS knowledge graph and demonstrates near-linear scalability with interpretable geometric reasoning. Ablation studies highlight the importance of adaptive attention and multi-space consistency.
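The fusion step can be sketched as attention-weighted score mixing; the per-space scores below are placeholders, not HyperComplEx's actual hyperbolic, complex, and Euclidean scoring functions:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fused_score(space_scores, relation_logits):
    """Weight per-space plausibility scores with relation-specific
    attention, letting each relation lean on whichever geometry
    represents it best (e.g., hyperbolic for hierarchies)."""
    weights = softmax(relation_logits)
    return sum(w * s for w, s in zip(weights, space_scores))

# A relation whose logits favor the first space pulls the fused score
# toward that space's (here, highest) score.
score = fused_score([0.9, 0.4, 0.6], relation_logits=[2.0, 0.0, 0.0])
```

Because the weights are a softmax over relation-specific logits, they are learnable end-to-end and sum to one, which keeps the fused score on the same scale as the per-space scores.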
Authors: Jugal Gajjar, Kaustik Ranaware, Kamalasankari Subramaniakuppusamy
A hybrid framework combining heterogeneous graph representations with local LLMs for Java vulnerability detection. Achieves 93.57% accuracy, an 8.36% gain over GAT embeddings and 17.81% over pretrained LLM baselines, and extracts salient subgraphs with natural-language explanations.
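A toy sketch of salient-subgraph extraction: keep the top-k nodes by an importance score and the edges between them. The node names and scores below are invented; the actual framework derives saliency from the learned graph representation.

```python
def salient_subgraph(nodes, edges, score, k=2):
    """Select the k highest-scoring nodes and the induced edges,
    yielding a small subgraph to explain in natural language."""
    keep = set(sorted(nodes, key=score, reverse=True)[:k])
    kept_edges = [(u, v) for (u, v) in edges if u in keep and v in keep]
    return keep, kept_edges

nodes = ["parse", "sanitize", "query", "log"]
edges = [("parse", "sanitize"), ("sanitize", "query"), ("query", "log")]
importance = {"parse": 0.2, "sanitize": 0.9, "query": 0.8, "log": 0.1}
keep, kept_edges = salient_subgraph(nodes, edges, score=importance.get, k=2)
```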
Authors: Jugal Gajjar, Kamalasankari Subramaniakuppusamy, Relsy Puthal, Kaustik Ranaware
Hybrid repair framework integrating the Bandit static analyzer with lightweight local LLMs (<8B parameters) in an iterative detect-repair-validate loop. Reduces false positives by 10.8% and improves fix accuracy by 13.51%. Developer-rated explanation quality: 4.5/5.
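The detect-repair-validate loop can be sketched schematically; `detect` stands in for Bandit and `repair` for a local LLM, and both stubs below are invented for illustration:

```python
def repair_loop(code, detect, repair, validate, max_iters=3):
    """Iterate: detect issues, propose a fix, and keep the fix only if
    it validates and reduces the finding count; stop once the code
    comes back clean."""
    for _ in range(max_iters):
        findings = detect(code)
        if not findings:
            return code, True          # converged: no findings remain
        candidate = repair(code, findings)
        if validate(candidate) and len(detect(candidate)) < len(findings):
            code = candidate           # accept only improving patches
    return code, not detect(code)

detect = lambda c: ["B605: shell injection"] if "os.system(" in c else []
repair = lambda c, f: c.replace("os.system(cmd)", "subprocess.run(cmd, shell=False)")
fixed, clean = repair_loop("os.system(cmd)", detect, repair, validate=lambda c: True)
```

The acceptance gate (validate and strictly fewer findings) is what keeps false-positive "fixes" from degrading the code between iterations.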
Authors: Jugal Gajjar, Kamalasankari Subramaniakuppusamy, Noha El Kachach
Language-agnostic multi-stage AI pipeline using Qwen2.5-Coder-3B fine-tuned with LoRA within the MLX framework. Covers 14 programming languages. Usefulness: 8.06/10, interpretability: 7.40/10, readability: 7.53/10.
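The LoRA arithmetic behind the fine-tuning is small enough to show in plain Python (shapes and values below are toys): the frozen weight W is augmented by a low-rank product B·A scaled by alpha/r, and only A and B are trained.

```python
def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_weight(W, A, B, alpha=2.0, r=1):
    """Effective weight W + (alpha/r) * B @ A for a rank-r adapter;
    W stays frozen, so only the tiny A and B matrices hold gradients."""
    delta = matmul(B, A)
    s = alpha / r
    return [[w + s * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 base weight
B = [[1.0], [0.0]]             # 2x1 trained down-projection output
A = [[0.5, 0.5]]               # 1x2 trained up-projection, rank r = 1
W_eff = lora_weight(W, A, B)
```

For a d×d layer, full fine-tuning updates d² parameters while a rank-r adapter updates only 2dr, which is what makes a 3B model trainable on local hardware.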
Authors: Jugal Gajjar, Kamalasankari Subramaniakuppusamy
Large-scale language-agnostic dataset unifying code across ten major programming languages. Over seven million parsed source files under a universal AST schema for cross-language reasoning and multilingual software analysis. Published on Hugging Face.
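A toy normalization step in the spirit of a universal AST schema, shown for Python via the stdlib `ast` module; the dataset's actual schema is richer and spans all ten languages:

```python
import ast

def to_universal(node):
    """Map a language-specific parse node to a minimal shared schema:
    a node type name plus recursively normalized children."""
    return {
        "type": type(node).__name__,
        "children": [to_universal(c) for c in ast.iter_child_nodes(node)],
    }

tree = to_universal(ast.parse("def f(x):\n    return x + 1"))
```

Once every language's parser emits the same node shape, cross-language queries (e.g., "all functions returning an arithmetic expression") can run over one schema instead of ten.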
Authors: Jugal Gajjar, Kaustik Ranaware
Multimodal sentiment analysis on CMU-MOSEI using transformer-based models with early fusion of text, audio, and visual modalities. Achieves 97.87% 7-class accuracy and 0.9682 F1-score.
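Early fusion in miniature: concatenate aligned per-timestep features from the three modalities before a single transformer attends over them. The feature dimensions below are invented.

```python
def early_fuse(text_feats, audio_feats, visual_feats):
    """Concatenate per-timestep feature vectors so one transformer
    models cross-modal interactions jointly from the first layer."""
    return [t + a + v for t, a, v in zip(text_feats, audio_feats, visual_feats)]

fused = early_fuse([[1.0, 2.0]], [[0.5]], [[0.1, 0.2, 0.3]])
```

The contrast is with late fusion, where each modality gets its own encoder and only the pooled outputs are combined, which cannot capture fine-grained cross-modal timing.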
Author: Jugal Gajjar
Advisor: Prof. Shi Feng
Presents a unified cross-language vulnerability lifecycle framework that combines detection, validation, and remediation using Universal AST normalization and hybrid AI reasoning. Demonstrates high detection accuracy (89.84–92.02%), cross-language transfer (74–78% F1), execution-based validation confirming 66–71% of genuine vulnerabilities, and iterative remediation success (81–87%). Enables efficient, locally deployable CI/CD integration, supporting scalable and reliable security lifecycle management across Java, Python, and C++.
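The lifecycle stages compose as a pipeline, sketched schematically below; all stage functions are hypothetical stand-ins, not the framework's API:

```python
def lifecycle(code, detect, validate_by_execution, remediate):
    """Run static detection, keep only findings confirmed by execution,
    then remediate the confirmed findings in turn."""
    findings = detect(code)
    confirmed = [f for f in findings if validate_by_execution(code, f)]
    for f in confirmed:
        code = remediate(code, f)
    return code, confirmed

code, confirmed = lifecycle(
    "strcpy(buf, user_input);",
    detect=lambda c: ["CWE-120"] if "strcpy(" in c else [],
    validate_by_execution=lambda c, f: True,    # pretend the PoC triggered
    remediate=lambda c, f: c.replace("strcpy(", "strncpy("),  # toy patch
)
```

The execution-based validation gate is what keeps remediation effort focused on genuine vulnerabilities rather than static-analysis false positives, which also keeps the loop cheap enough for CI/CD.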