Leveraging Power of Large Language Model in Entity Linking via Multi-step Prompting and Targeted Reasoning
Authors: Yajie Li†, Albert Galimov†, Mitra Datta Ganapaneni†, Pujitha Thejaswi†, De Meng, Priyanshu Kumar, Saloni Potdar
Entity Linking (EL) has traditionally relied on large annotated datasets and extensive model fine-tuning. While recent few-shot methods leverage large language models (LLMs) through prompting to reduce training requirements, they often suffer from inefficiencies due to expensive LLM-based reasoning. ARTER (Adaptive Routing and Targeted Entity Reasoning) presents a structured pipeline that achieves high performance without deep fine-tuning by strategically combining candidate generation, context-based scoring, adaptive routing, and selective reasoning. ARTER computes a small set of complementary signals (both embedding-based and LLM-based) over the retrieved candidates to categorize contextual mentions into easy and hard cases. These cases are then handled by a low-cost entity linker (e.g., ReFinED) and more expensive targeted LLM-based reasoning, respectively. On standard benchmarks, ARTER outperforms ReFinED by up to +4.47%, with an average gain of +2.53% on 5 out of 6 datasets, and performs comparably to pipelines that apply LLM-based reasoning to all mentions, while being twice as efficient in terms of the number of LLM tokens.
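The adaptive routing step described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the signal function, the confidence threshold, and the linker/reasoner interfaces are all hypothetical placeholders standing in for ARTER's actual scoring and routing components.

```python
from dataclasses import dataclass, field

@dataclass
class Mention:
    text: str            # surface form of the mention
    context: str         # surrounding text used for scoring
    candidates: list = field(default_factory=list)  # retrieved candidate entities

def link_entities(mentions, confidence_signal, cheap_linker, llm_reasoner,
                  threshold=0.8):
    """Route each mention based on a combined confidence signal:
    high-confidence ('easy') cases go to a low-cost linker, while
    low-confidence ('hard') cases get targeted LLM-based reasoning."""
    results = {}
    for m in mentions:
        score = confidence_signal(m)          # combines embedding/LLM signals
        if score >= threshold:
            results[m.text] = cheap_linker(m)   # e.g. a ReFinED-style linker
        else:
            results[m.text] = llm_reasoner(m)   # expensive targeted reasoning
    return results
```

Because only the hard cases reach the LLM, token usage scales with the fraction of low-confidence mentions rather than with the total mention count, which is the source of the efficiency gain reported above.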