Leveraging the Power of Large Language Models in Entity Linking via Adaptive Routing and Targeted Reasoning
Authors: Yajie Li†, Albert Galimov†, Mitra Datta Ganapaneni†, Pujitha Thejaswi†, De Meng, Priyanshu Kumar, Saloni Potdar
Entity Linking (EL) has traditionally relied on large annotated datasets and extensive model fine-tuning. While recent few-shot methods leverage large language models (LLMs) through prompting to reduce training requirements, they often suffer from inefficiencies due to expensive LLM-based reasoning. ARTER (Adaptive Routing and Targeted Entity Reasoning) presents a structured pipeline that achieves high performance without deep fine-tuning by strategically combining candidate generation, context-based scoring, adaptive routing, and selective reasoning. ARTER computes a small set of complementary signals (both embedding-based and LLM-based) over the retrieved candidates to categorize contextual mentions into easy and hard cases. These cases are then handled by a low-cost entity linker (e.g., ReFinED) and by more expensive targeted LLM-based reasoning, respectively. On standard benchmarks, ARTER outperforms ReFinED by up to +4.47%, with an average gain of +2.53% on 5 out of 6 datasets, and performs comparably to pipelines that apply LLM-based reasoning to all mentions, while being twice as efficient in terms of the number of LLM tokens.
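The adaptive routing described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the margin-based confidence signal, the threshold value, and the function names (`score_margin`, `cheap_link`, `llm_reason`) are all assumptions introduced here for clarity.

```python
def score_margin(candidate_scores):
    """Toy confidence signal: gap between the top two candidate scores.
    ARTER combines several embedding- and LLM-based signals; a single
    margin stands in for them here."""
    ranked = sorted(candidate_scores, reverse=True)
    second = ranked[1] if len(ranked) > 1 else 0.0
    return ranked[0] - second

def route_mention(candidate_scores, threshold=0.2):
    """Label a mention 'easy' (confident top candidate) or 'hard'
    (ambiguous), so hard cases alone incur LLM cost."""
    return "easy" if score_margin(candidate_scores) >= threshold else "hard"

def link(mention, candidates, scores, cheap_link, llm_reason):
    """Dispatch: easy cases go to a low-cost linker (e.g., ReFinED-style),
    hard cases to targeted LLM-based reasoning. Both callables are
    caller-supplied placeholders."""
    if route_mention(scores) == "easy":
        return cheap_link(mention, candidates, scores)
    return llm_reason(mention, candidates)
```

Because only ambiguous mentions reach `llm_reason`, token spend scales with the hard-case fraction rather than the total mention count, which is the source of the efficiency gain the abstract reports.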
LaDiR: Latent Diffusion Enhances LLMs for Text Reasoning
April 28, 2026 · research area: Speech and Natural Language Processing · conference: ICLR
Large Language Models (LLMs) demonstrate their reasoning ability through chain-of-thought (CoT) generation. However, an LLM's autoregressive decoding may limit its ability to revisit and refine earlier tokens in a holistic manner, which can also lead to inefficient exploration of diverse solutions. In this paper, we propose LaDiR (Latent Diffusion Reasoner), a novel reasoning framework that unifies the expressiveness of continuous latent…
Entity Disambiguation via Fusion Entity Decoding
June 4, 2024 · research areas: Knowledge Bases and Search; Speech and Natural Language Processing · conference: NAACL
Entity disambiguation (ED), which links the mentions of ambiguous entities to their referent entities in a knowledge base, serves as a core component in entity linking (EL). Existing generative approaches demonstrate improved accuracy compared to classification approaches under the standardized ZELDA benchmark. Nevertheless, generative approaches suffer from the need for large-scale pre-training and inefficient generation. Most importantly,…