Denis Savenkov


2025

Dynamic Strategy Planning for Efficient Question Answering with Large Language Models
Tanmay Parekh | Pradyot Prakash | Alexander Radovic | Akshay Shekher | Denis Savenkov
Findings of the Association for Computational Linguistics: NAACL 2025

Research has shown the effectiveness of reasoning (e.g., Chain-of-Thought), planning (e.g., SelfAsk) and retrieval-augmented generation strategies for improving the performance of Large Language Models (LLMs) on various tasks, such as question answering. However, using a single fixed strategy to answer all kinds of questions is sub-optimal in performance and inefficient in terms of generated tokens and retrievals. In our work, we propose a novel technique, DyPlan, to induce a dynamic strategy selection process in LLMs for cost-effective question answering. DyPlan incorporates an initial decision step to select the most suitable strategy conditioned on the input question and guides the LLM’s response generation accordingly. We extend DyPlan to DyPlan-verify, adding an internal verification and correction process to further enrich the generated answer. Experiments on three prominent multi-hop question answering (MHQA) datasets reveal how DyPlan can improve model performance by 7-13% while reducing the cost by 11-32% relative to the best baseline model.
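The dynamic strategy selection described above can be illustrated with a minimal sketch. This is not the paper's implementation: the strategy set, prompts, and the caller-supplied `llm` and `retrieve` callables are illustrative assumptions.

```python
# A minimal sketch of DyPlan-style dynamic strategy selection and DyPlan-verify.
# `llm` and `retrieve` are caller-supplied callables; strategy names and prompts are assumptions.
from typing import Callable, List

STRATEGIES = ["direct", "chain_of_thought", "retrieval_augmented"]  # illustrative set

def choose_strategy(llm: Callable[[str], str], question: str) -> str:
    """Initial decision step: ask the model which strategy fits this question."""
    prompt = (f"Choose one of {STRATEGIES} to answer the question below. "
              f"Reply with the strategy name only.\n\nQuestion: {question}")
    choice = llm(prompt).strip().lower()
    return choice if choice in STRATEGIES else "chain_of_thought"  # safe fallback

def answer(llm: Callable[[str], str],
           retrieve: Callable[[str], List[str]],
           question: str,
           verify: bool = False) -> str:
    """Generate an answer with the selected strategy; optionally verify and correct it."""
    strategy = choose_strategy(llm, question)
    if strategy == "direct":
        draft = llm(f"Answer concisely: {question}")
    elif strategy == "retrieval_augmented":
        context = "\n".join(retrieve(question))
        draft = llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
    else:  # chain_of_thought
        draft = llm(f"Think step by step, then give the final answer: {question}")
    if verify:  # DyPlan-verify: internal verification and correction pass
        draft = llm(f"Question: {question}\nDraft answer: {draft}\n"
                    "Check the draft for errors and return a corrected final answer.")
    return draft
```

Cheap questions can thus be answered directly, while only the harder ones pay for reasoning tokens or retrieval calls.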

2022

RetroNLU: Retrieval Augmented Task-Oriented Semantic Parsing
Vivek Gupta | Akshat Shrivastava | Adithya Sagar | Armen Aghajanyan | Denis Savenkov
Proceedings of the 4th Workshop on NLP for Conversational AI

While large pre-trained language models accumulate a lot of knowledge in their parameters, it has been demonstrated that augmenting them with non-parametric retrieval-based memory has a number of benefits, ranging from improved accuracy to data efficiency, for knowledge-focused tasks such as question answering. In this work, we apply retrieval-based modeling ideas to the challenging complex task of multi-domain task-oriented semantic parsing for conversational assistants. Our technique, RetroNLU, extends a sequence-to-sequence model architecture with a retrieval component, which is used to retrieve existing similar samples and present them as additional context to the model. In particular, we analyze two settings, where we augment an input with (a) retrieved nearest-neighbor utterances (utterance-nn), and (b) ground-truth semantic parses of nearest-neighbor utterances (semparse-nn). Our technique outperforms the baseline method by 1.5% absolute macro-F1, especially in the low-resource setting, matching the baseline model accuracy with only 40% of the complete data. Furthermore, we analyze the quality, model sensitivity, and performance of the nearest-neighbor retrieval component for semantic parses of varied utterance complexity.
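A rough sketch of the retrieval augmentation step described above, assuming an utterance encoder and a seq2seq parser are available as callables; the names `encoder` and `parser` and the `[RETRIEVED]` separator are illustrative, not from the paper.

```python
# A minimal sketch of RetroNLU-style retrieval augmentation for semantic parsing.
# All helper names and the separator token are assumptions for illustration.
from typing import Callable, List, Tuple
import numpy as np

def retrieve_neighbors(query: str,
                       encoder: Callable[[str], np.ndarray],
                       train_set: List[Tuple[str, str]],  # (utterance, semantic parse) pairs
                       k: int = 3) -> List[Tuple[str, str]]:
    """Return the k training examples whose utterances are closest to the query."""
    q = encoder(query)
    sims = [float(q @ encoder(u)) for u, _ in train_set]
    top = np.argsort(sims)[::-1][:k]
    return [train_set[i] for i in top]

def parse_with_retrieval(utterance: str,
                         encoder: Callable[[str], np.ndarray],
                         parser: Callable[[str], str],
                         train_set: List[Tuple[str, str]],
                         use_parses: bool = True) -> str:
    """Augment the input with retrieved neighbors before running the seq2seq parser.

    use_parses=False roughly corresponds to the utterance-nn setting (neighbor utterances),
    use_parses=True to the semparse-nn setting (neighbor ground-truth parses).
    """
    neighbors = retrieve_neighbors(utterance, encoder, train_set)
    extra = " | ".join(p if use_parses else u for u, p in neighbors)
    return parser(f"{utterance} [RETRIEVED] {extra}")
```

The retrieved parses act as worked examples in context, which is why the semparse-nn variant is especially helpful when labeled data is scarce.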

2017

EviNets: Neural Networks for Combining Evidence Signals for Factoid Question Answering
Denis Savenkov | Eugene Agichtein
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

A critical task for question answering is the final answer selection stage, which has to combine multiple signals available about each answer candidate. This paper proposes EviNets: a novel neural network architecture for factoid question answering. EviNets scores candidate answer entities by combining the available supporting evidence, e.g., structured knowledge bases and unstructured text documents. EviNets represents each piece of evidence with a dense embedding vector, scores its relevance to the question, and aggregates the support for each candidate to predict their final scores. Each of the components is generic and allows plugging in a variety of models for semantic similarity scoring and information aggregation. We demonstrate the effectiveness of EviNets in experiments on the existing TREC QA and WikiMovies benchmarks, and on the new Yahoo! Answers dataset introduced in this paper. EviNets can be extended to other information types and could facilitate future work on combining evidence signals for joint reasoning in question answering.
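A minimal sketch of the evidence-aggregation idea described above, using dot-product relevance scoring and softmax weighting as stand-ins for the paper's scoring and aggregation components; the function name and weighting scheme are assumptions.

```python
# A minimal NumPy sketch of EviNets-style evidence aggregation (illustrative, not the exact model).
from typing import List
import numpy as np

def score_candidates(question_vec: np.ndarray,
                     evidence_vecs: np.ndarray,
                     evidence_to_candidate: List[int],
                     num_candidates: int) -> np.ndarray:
    """Score each candidate entity by aggregating relevance-weighted support from its evidence.

    question_vec:          (d,) embedding of the question
    evidence_vecs:         (n, d) embeddings of evidence pieces (KB triples, text snippets, ...)
    evidence_to_candidate: index of the candidate each evidence piece supports
    """
    # Relevance of each evidence piece to the question (dot-product scoring head).
    relevance = evidence_vecs @ question_vec           # shape (n,)
    weights = np.exp(relevance - relevance.max())      # softmax-style weighting
    weights /= weights.sum()
    # Aggregate the weighted support per candidate entity.
    scores = np.zeros(num_candidates)
    for w, c in zip(weights, evidence_to_candidate):
        scores[c] += w
    return scores
```

Because scoring and aggregation are separate steps, either could be swapped for a different similarity model or pooling operator, which is the modularity the abstract highlights.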

2016

Crowdsourcing for (almost) Real-time Question Answering
Denis Savenkov | Scott Weitzner | Eugene Agichtein
Proceedings of the Workshop on Human-Computer Question Answering

2015

Relation Extraction from Community Generated Question-Answer Pairs
Denis Savenkov | Wei-Lwun Lu | Jeff Dalton | Eugene Agichtein
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop