KGLens: Towards Efficient and Effective Knowledge Probing of Large Language Models with Knowledge Graphs
Authors: Daniel Zheng, Richard Bai, Yizhe Zhang, Yi (Siri) Su, Xiaochuan Niu, Navdeep Jaitly
This paper was accepted at the Workshop Towards Knowledgeable Language Models at ACL 2024.
Large Language Models (LLMs) may hallucinate facts, while curated Knowledge Graphs (KGs) are typically factually reliable, especially for domain-specific knowledge. Measuring the alignment between KGs and LLMs can effectively probe the factuality of LLMs and identify their knowledge blind spots. However, verifying LLMs over extensive KGs can be expensive. In this paper, we present KGLens, a Thompson-sampling-inspired framework for effectively and efficiently measuring the alignment between KGs and LLMs. KGLens features a graph-guided question generator that converts KGs into natural language, along with a carefully designed importance sampling strategy based on a parameterized KG structure to expedite KG traversal. Our simulation experiments compare the brute-force method with KGLens under six different sampling methods, demonstrating that our approach achieves superior probing efficiency. Leveraging KGLens, we conducted in-depth analyses of the factual accuracy of ten LLMs across three large domain-specific KGs from Wikidata, comprising over 19K edges, 700 relations, and 21K entities. Human evaluation results indicate that KGLens can assess LLMs with a level of accuracy nearly equivalent to that of human annotators, achieving 95.7% of the human accuracy rate.
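To make the sampling strategy concrete, here is a minimal, illustrative sketch of Thompson-sampling-style edge selection over a parameterized KG. It assumes each edge carries a Beta posterior over the LLM's failure probability that is updated after every probe; the class and function names, the uniform Beta(1, 1) prior, and the batch-selection rule are illustrative assumptions, not the paper's exact formulation.

```python
import random

class Edge:
    """A KG edge (subject, relation, object) with a Beta posterior
    over the probability that the LLM answers it incorrectly."""
    def __init__(self, subj, rel, obj):
        self.subj, self.rel, self.obj = subj, rel, obj
        self.alpha = 1.0  # pseudo-count of observed LLM failures
        self.beta = 1.0   # pseudo-count of observed LLM successes

def select_edges(edges, batch_size):
    """Thompson-style selection: draw a failure probability from each
    edge's Beta posterior, then probe the edges that currently look
    most error-prone. Randomness in the draw balances exploration
    (uncertain edges) with exploitation (known-hard edges)."""
    scored = [(random.betavariate(e.alpha, e.beta), e) for e in edges]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [e for _, e in scored[:batch_size]]

def update(edge, llm_failed):
    """Update an edge's posterior after verifying the LLM's answer
    to the question generated from that edge."""
    if llm_failed:
        edge.alpha += 1.0
    else:
        edge.beta += 1.0

# Illustrative usage: repeatedly probe a small KG with a stub verifier.
if __name__ == "__main__":
    kg = [Edge("Paris", "capital_of", "France"),
          Edge("Kyoto", "located_in", "Japan"),
          Edge("CRISPR", "discovered_by", "Doudna")]
    for _ in range(10):
        for edge in select_edges(kg, batch_size=2):
            failed = random.random() < 0.3  # stand-in for an LLM check
            update(edge, failed)
```

Compared with brute-force traversal, which verifies every edge the same number of times, this style of posterior-guided sampling concentrates the probing budget on edges where the LLM is most likely to be wrong or where uncertainty remains high, which is the efficiency gain the abstract refers to.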