Research Focus

Advancing the frontiers of artificial intelligence through knowledge-infused reasoning, novel inference methods, and robust data science.

Our Core Research Pillars

Knowledge-Infused Neuro-Symbolic AI

We embed symbolic knowledge (clinical guidelines, knowledge graphs, domain axioms) directly into neural architectures, reducing hallucination in mental health and scientific domains.

Hallucination Mitigation & Symbolic Robustness

We pinpoint where hallucinations originate when LLMs process symbolic triggers (negation, numbers, entities) and develop targeted neuron-level interventions to patch these failures.

Explainable & Trustworthy Systems

We evaluate attention fidelity, moral consistency, and bias amplification, providing mathematical guarantees for interpretable AI decisions in clinical and cybersecurity contexts.

Scientific Attribution & Verification

We build RAG systems with rationale-driven selection (METEORA) that generate human-verifiable citations, ensuring trustworthy source attribution for scientific claims.

Domain-Centered AI Development

We specialize in mental health, cybersecurity, and biomedicine, creating domain-specific datasets (LOST, CounterLogic, REASONS) infused with clinical and threat intelligence.

Bias & Safety Evaluation

We systematically measure societal biases and adversarial vulnerabilities in LLMs through human-interpretable prompts, bias studies, and unlearning assessments for safe deployment.

Our Technical Approaches

We develop transformer-based models that embed symbolic knowledge directly into self-attention mechanisms and neural representation spaces. By infusing curated knowledge graphs, clinical guidelines, and domain axioms into the model's reasoning pathways, we create systems that combine statistical learning with logical constraints, ensuring outputs are both contextually aware and grounded in validated facts—fundamentally reducing hallucination in scientific and healthcare applications.
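As a minimal sketch of this idea (illustrative only, not our production architecture): a relation mask derived from a knowledge graph can be added to the attention logits, so token pairs linked by a validated fact receive extra weight. The kg_bias tensor and the alpha strength below are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def knowledge_biased_attention(q, k, v, kg_bias, alpha=1.0):
    """Scaled dot-product attention with an additive knowledge-graph bias.

    q, k, v : (batch, seq, dim) query/key/value tensors.
    kg_bias : (seq, seq) tensor; kg_bias[i, j] > 0 when tokens i and j
              are linked by a curated relation (guideline, axiom, KG edge).
    alpha   : strength of the symbolic prior (a tunable assumption).
    """
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5  # standard attention logits
    scores = scores + alpha * kg_bias            # symbolic knowledge as a prior
    return F.softmax(scores, dim=-1) @ v

# Toy usage: 1 batch, 4 tokens, 8-dim heads; tokens 0 and 2 share a KG edge.
q = k = v = torch.randn(1, 4, 8)
kg_bias = torch.zeros(4, 4)
kg_bias[0, 2] = kg_bias[2, 0] = 1.0
out = knowledge_biased_attention(q, k, v, kg_bias)
print(out.shape)  # torch.Size([1, 4, 8])
```

Because the bias enters before the softmax, the symbolic prior reshapes, rather than overrides, the learned attention distribution.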

We develop methods that precisely locate where hallucinations emerge within LLMs when processing symbolic triggers like negation, modifiers, numbers, and named entities. Through neuron-level analysis and constraint-based probing of models like Gemma, we identify vulnerable internal components and design targeted interventions that patch these failure points, enabling systematic debugging of factual errors before deployment.
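The sketch below illustrates the general recipe rather than our exact method: run a model on a minimal pair that differs only in a symbolic trigger (e.g., a negated vs. non-negated claim), rank hidden units by how much their activations shift, then ablate the most sensitive ones. The toy layer stands in for forward hooks on a real model's activations.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Toy stand-in for one LLM feed-forward layer; in practice these
# activations would come from forward hooks on a real model.
layer = nn.Sequential(nn.Linear(16, 64), nn.ReLU())

def neuron_sensitivity(layer, x_pos, x_neg):
    """Rank hidden units by how much a symbolic trigger shifts them.

    x_pos / x_neg are embeddings of a minimal pair, e.g. the same claim
    with and without negation (hypothetical inputs for illustration).
    """
    with torch.no_grad():
        delta = (layer(x_pos) - layer(x_neg)).abs().mean(dim=0)
    return delta.argsort(descending=True)  # most sensitive neurons first

def patch_neurons(activations, neuron_ids):
    """Targeted intervention: ablate suspect neurons so a localized
    failure cannot propagate downstream."""
    patched = activations.clone()
    patched[..., neuron_ids] = 0.0
    return patched

x_with_neg, x_without_neg = torch.randn(8, 16), torch.randn(8, 16)
suspects = neuron_sensitivity(layer, x_with_neg, x_without_neg)[:5]
patched = patch_neurons(layer(x_with_neg).detach(), suspects)
print("candidate failure neurons:", suspects.tolist())
```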

We architect retrieval-augmented generation systems that replace heuristic ranking with selection based on explicit reasoning chains. Our METEORA framework generates human-interpretable rationales for document selection, enabling trustworthy citation generation in scientific literature and high-stakes domains where verifiable attribution is critical for regulatory compliance and clinical decision support.
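The METEORA paper defines the actual pipeline; the sketch below only conveys the shape of rationale-driven selection, with a hypothetical generate_rationale judge standing in for an LLM and a trivial keyword check standing in for its reasoning.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    doc_id: str
    text: str
    rationale: str  # human-readable reason this passage was kept

def generate_rationale(query: str, passage: str) -> tuple[bool, str]:
    """Hypothetical LLM judge returning (keep?, rationale). A real system
    would prompt an LLM; a keyword check stands in here."""
    relevant = any(w in passage.lower() for w in query.lower().split())
    reason = ("mentions the query terms and supports the claim"
              if relevant else "states no connection to the query")
    return relevant, reason

def select_evidence(query: str, passages: dict[str, str]) -> list[Evidence]:
    """Keep every passage whose rationale justifies inclusion, so the
    evidence set sizes itself adaptively instead of using a fixed top-k."""
    kept = []
    for doc_id, text in passages.items():
        keep, reason = generate_rationale(query, text)
        if keep:
            kept.append(Evidence(doc_id, text, reason))
    return kept  # each item carries a citation plus a verifiable rationale

docs = {"d1": "Negation flips model predictions on clinical claims.",
        "d2": "Unrelated note about GPU memory usage."}
for ev in select_evidence("negation in clinical claims", docs):
    print(ev.doc_id, "->", ev.rationale)
```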

We build comprehensive evaluation suites that quantify model consistency, moral reliability, and bias amplification across diverse contexts. Using counterfactual reasoning (CounterLogic), adversarial prompting, and domain-specific stress tests for mental health and cybersecurity applications, we provide mathematical guarantees on safety-critical performance, ensuring AI systems remain trustworthy when deployed in socially sensitive environments.
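As a minimal sketch of counterfactual consistency scoring (with model as a placeholder for any LLM callable): each pair declares whether the counterfactual should flip the answer (a logical counterfactual) or must not (e.g., a demographic swap), and the metric is the fraction of pairs the model handles consistently.

```python
def consistency_rate(model, prompt_pairs):
    """Fraction of counterfactual pairs answered consistently.

    model        : callable str -> str (placeholder for an LLM API).
    prompt_pairs : list of (prompt, counterfactual, should_flip), where
                   should_flip=True means the answers ought to differ and
                   False means they ought to match.
    """
    consistent = 0
    for prompt, counterfactual, should_flip in prompt_pairs:
        a, b = model(prompt), model(counterfactual)
        flipped = a.strip().lower() != b.strip().lower()
        consistent += (flipped == should_flip)
    return consistent / len(prompt_pairs)

# Toy usage with a stub model that just keys off the word "not".
stub = lambda p: "yes" if "not" not in p else "no"
pairs = [("Is the test positive?", "Is the test not positive?", True),
         ("Should Alex get a loan?", "Should Priya get a loan?", False)]
print(f"consistency: {consistency_rate(stub, pairs):.2f}")
```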

Visualizing Knowledge & Inference

Knowledge Graph Visualization

This diagram visualizes the complex interplay between human-like thought processes and advanced computational logic, a core theme in our knowledge-infused AI research.

Explore Related Publications

Example Projects & Applications

Human-Readable Adversarial Prompts: An Investigation into LLM Vulnerabilities Using Situational Context

Adversarial Attacks in LLMs

Human-readable, situation-driven attacks pose a threat to both open-source and black-box models.

Read Paper

IMRNNs: An Efficient Method for Interpretable Dense Retrieval via Embedding Modulation

Information Retrieval, Interpretability, Trustworthy AI

IMRNNs aims to make dense retrieval in RAG interpretable by exposing how queries and documents align, while improving retrieval quality through lightweight bidirectional embedding modulation. In practice, it enables faster debugging and higher trust in retrieval behavior, and it boosts effectiveness across benchmarks without relying on expensive re-rankers.
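The paper defines the actual modulation; the toy sketch below only gestures at the idea under assumed forms: each side gates the other's embedding, and the per-dimension contributions to the final score are exposed for inspection.

```python
import numpy as np

rng = np.random.default_rng(0)

def modulate(a, b):
    """Toy 'modulation': gate vector a by a sigmoid of vector b, so each
    side reshapes the other before scoring (an assumed form, not the
    paper's exact mechanism)."""
    gate = 1.0 / (1.0 + np.exp(-b))
    return a * gate

def interpretable_score(query_emb, doc_emb):
    """Score a (query, doc) pair and expose per-dimension contributions,
    which is what makes the alignment inspectable."""
    q_mod = modulate(query_emb, doc_emb)   # doc modulates the query
    d_mod = modulate(doc_emb, query_emb)   # query modulates the doc
    contributions = q_mod * d_mod          # per-dimension alignment
    return contributions.sum(), contributions

q, d = rng.normal(size=8), rng.normal(size=8)
score, contrib = interpretable_score(q, d)
top = np.argsort(-np.abs(contrib))[:3]
print(f"score={score:.3f}, most influential dims={top.tolist()}")
```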

Generation-Time vs. Post-hoc Citation: A Holistic Evaluation of LLM Attribution

Source Attribution, Trustworthy AI

This work compares generation-time and post-hoc citation strategies to determine which produces more reliable source attribution in LLM outputs. We show that retrieval quality is the dominant driver of citation quality, and that a post-hoc, retrieval-first approach can deliver more complete citations with competitive correctness in high-stakes settings (a minimal sketch of that pattern follows below).

Read Paper
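A minimal sketch of the post-hoc, retrieval-first pattern referenced above (the similarity function and corpus are illustrative stand-ins): generate the answer first, then retrieve and attach the best-supported source for each sentence.

```python
def post_hoc_cite(answer: str, corpus: dict[str, str], sim) -> list[str]:
    """Attach a citation to each answer sentence after generation.

    corpus : {source_id: source_text}, a stand-in for a retrieval index.
    sim    : callable (sentence, source_text) -> float similarity score.
    """
    cited = []
    for sentence in filter(None, (s.strip() for s in answer.split("."))):
        best = max(corpus, key=lambda sid: sim(sentence, corpus[sid]))
        cited.append(f"{sentence}. [{best}]")
    return cited

# Toy similarity: word overlap (a real system would use a dense retriever).
overlap = lambda a, b: len(set(a.lower().split()) & set(b.lower().split()))

corpus = {"S1": "Negation often flips LLM predictions.",
          "S2": "Retrieval quality drives citation quality."}
answer = "Better retrieval improves citation quality. Negation flips predictions"
print("\n".join(post_hoc_cite(answer, corpus, overlap)))
```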

Ranking Free RAG: Replacing Re-ranking with Selection in RAG for Sensitive Domains

Information Retrieval, RAG, Interpretability, Trustworthy AI

METEORA aims to make RAG evidence selection explainable and poisoning-resilient by replacing fixed top-k retrieval with rationale-driven, adaptive cutoffs plus verification. It improves recall and precision while using far less evidence, boosts downstream answer accuracy, and significantly strengthens robustness against adversarial or misleading content in high-stakes domains.

Read Paper

Our Valued Collaborators

NSF
NIH
IBM Research
UMBC