CAI Seminar - LLMs and Knowledge Graphs: An attempt towards explainability.

Abstract: Large Language Models (LLMs) have achieved remarkable success in natural language understanding and generation, yet their decision-making processes often remain opaque. As these models are increasingly applied in sensitive domains, the demand for explainability has become critical. Knowledge graphs, with their structured representation of entities and relationships, offer a promising pathway to enhance transparency and interpretability in LLMs.

In this talk, Dr. Debayan Banerjee from Leuphana University of Lüneburg will explore the intersection of LLMs and knowledge graphs as a means to achieve explainability. The session will highlight how integrating symbolic reasoning with neural architectures can provide clearer insights into model outputs, reduce hallucinations, and improve trustworthiness.
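
To make the kind of integration discussed here concrete, the short Python sketch below illustrates one common pattern: facts retrieved from a knowledge graph are passed to a language model as context, and the answer is returned together with the triples that grounded it. The triple store, the retrieve and answer_with_evidence helpers, and the stubbed call_llm function are all illustrative assumptions, not material from the talk.

```python
# Minimal sketch: grounding an LLM answer in knowledge-graph triples so the
# supporting facts can be shown alongside the generated answer.
# All names here (TRIPLES, retrieve, call_llm, answer_with_evidence) are illustrative.

TRIPLES = [
    ("Aspirin", "treats", "Headache"),
    ("Aspirin", "interactsWith", "Warfarin"),
    ("Warfarin", "isA", "Anticoagulant"),
]

def retrieve(entity: str):
    """Return all triples that mention the entity (toy one-hop retrieval)."""
    return [t for t in TRIPLES if entity in (t[0], t[2])]

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (local or hosted model)."""
    return f"[model answer conditioned on]\n{prompt}"

def answer_with_evidence(question: str, entity: str):
    """Answer a question using KG facts as context and return the evidence."""
    facts = retrieve(entity)
    context = "\n".join(f"{s} {p} {o}" for s, p, o in facts)
    prompt = (
        f"Facts:\n{context}\n\n"
        f"Question: {question}\nAnswer using only the facts above."
    )
    return call_llm(prompt), facts  # answer plus the triples that grounded it

if __name__ == "__main__":
    answer, evidence = answer_with_evidence(
        "Is it safe to combine Aspirin and Warfarin?", "Aspirin"
    )
    print(answer)
    print("Supporting triples:", evidence)
```

Because the supporting triples are returned alongside the generated answer, a reader can inspect exactly which stored facts the output relied on.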

Key themes will include:

  • Challenges of explainability in current LLM architectures.

  • The role of knowledge graphs in grounding LLM inference.

  • Techniques for combining structured knowledge with generative models (a minimal verification sketch follows this list).

  • Applications in domains requiring transparency, such as healthcare, law, and education.

  • Future directions for explainable AI through hybrid approaches.
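
Complementing retrieval-time grounding, one technique for combining structured knowledge with generative models is post-hoc verification: statements derived from a model's output are checked against the graph, and anything without support is flagged rather than presented as fact. The sketch below is again an illustrative assumption rather than material from the talk; in practice, extracting claims as triples from free text would require an information-extraction step, and only the final comparison against the knowledge graph is shown here.

```python
# Minimal sketch: checking model-generated statements against a knowledge
# graph so unsupported claims can be flagged instead of presented as fact.
# The triple store and the claim format are illustrative assumptions.

TRIPLES = {
    ("Marie Curie", "bornIn", "Warsaw"),
    ("Marie Curie", "awarded", "Nobel Prize in Physics"),
    ("Marie Curie", "awarded", "Nobel Prize in Chemistry"),
}

def verify(claims):
    """Split candidate (subject, predicate, object) claims into those
    supported by the knowledge graph and those without support."""
    supported = [c for c in claims if c in TRIPLES]
    unsupported = [c for c in claims if c not in TRIPLES]
    return supported, unsupported

if __name__ == "__main__":
    generated_claims = [
        ("Marie Curie", "bornIn", "Warsaw"),   # grounded in the KG
        ("Marie Curie", "bornIn", "Paris"),    # unsupported, likely hallucinated
    ]
    supported, unsupported = verify(generated_claims)
    print("Supported:", supported)
    print("Flagged as unsupported:", unsupported)
```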

The presentation will emphasize both theoretical foundations and practical case studies, showing how knowledge graph–enhanced LLMs can bridge the gap between performance and interpretability. By advancing explainability, this approach paves the way for more trustworthy and responsible AI systems.