Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities in generating fluent text and answering complex queries. However, their reliance on statistical patterns alone often limits their factual accuracy, consistency, and depth of reasoning. Knowledge graphs, with their structured representation of entities and relationships, offer a powerful complement to LLMs by grounding inference in explicit, verifiable information.
In this talk, Dr. Raghava Muthuraju and Dr. Suman Roy from Oracle will explore how integrating knowledge graphs with LLM inference can significantly enhance the reliability and interpretability of AI systems. The session will highlight methods for combining symbolic reasoning with neural generation, enabling models to deliver responses that are not only coherent but also contextually accurate and logically consistent.
Key themes will include:
The limitations of standalone LLMs in factual reasoning and knowledge retention.
Techniques for augmenting LLM inference with structured knowledge graphs.
Applications in domains such as healthcare, enterprise search, and intelligent assistants.
Challenges in scalability, integration, and maintaining up-to-date knowledge bases.
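To make the second theme concrete, the core pattern of knowledge-graph-augmented inference can be sketched as: retrieve facts relevant to the query from a graph, then inject them into the prompt so the model's generation is grounded in explicit statements. The sketch below is purely illustrative and not Oracle's implementation; the toy triples, the naive substring entity matcher, and the prompt template are all assumptions, and the actual LLM call is omitted.

```python
# Illustrative sketch of knowledge-graph-grounded prompting.
# The triples, matcher, and template are hypothetical examples,
# not a production system; a real pipeline would use entity
# linking and a graph query language rather than substring match.

# A toy knowledge graph as (subject, predicate, object) triples.
KG = {
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("warfarin", "is_a", "anticoagulant"),
}

def retrieve_facts(query: str, kg=KG):
    """Return triples whose subject or object is mentioned in the query."""
    q = query.lower()
    return sorted(t for t in kg if t[0] in q or t[2] in q)

def build_grounded_prompt(query: str) -> str:
    """Format retrieved triples as context for an LLM prompt."""
    facts = retrieve_facts(query)
    context = "\n".join(f"- {s} {p.replace('_', ' ')} {o}" for s, p, o in facts)
    return (
        "Answer using only the facts below.\n"
        f"Facts:\n{context}\n"
        f"Question: {query}"
    )

# The resulting prompt would then be sent to the LLM for generation.
print(build_grounded_prompt("Can a patient on warfarin take aspirin?"))
```

The design choice here is that the symbolic step (graph retrieval) constrains the neural step (generation): the model is asked to answer from verifiable triples rather than from its parametric memory alone, which is what makes the output auditable.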
The presentation will showcase case studies and research prototypes that demonstrate the synergy between LLMs and knowledge graphs. By bridging statistical learning with structured reasoning, this approach paves the way for AI systems that are more trustworthy, explainable, and adaptable across domains.