Experimenting with LlamaIndex RouterQueryEngine and Document Management
How RouterQueryEngine works in a DevSecOps chatbot

We have explored several of the query engines LlamaIndex offers over the last few weeks, each handling different use cases. In this article, we are going to explore RouterQueryEngine, which selects one of several candidate query engines to execute a query.
We continue to stay in the DevSecOps space for our chatbot use case. We will convert my blog post on pipeline security and guardrails into a PDF and ask questions about that PDF file. We will observe how the LlamaIndex RouterQueryEngine routes our queries to different query engines.
Let’s get started.
RouterQueryEngine
There are many different techniques for LLM-based queries over your private data. To name a few:
- Summarization
- Top-k semantic search
- Complex queries such as compare and contrast
RouterQueryEngine is offered by LlamaIndex as a single interface that routes your queries to different query engines. Let's look at the high-level architecture of how we will use RouterQueryEngine in our chatbot:

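To make the routing idea concrete, here is a minimal plain-Python sketch of what a router query engine does conceptually: a selector picks one of several candidate engines, and the query is forwarded to the chosen engine. All class and function names here are illustrative assumptions, not the actual LlamaIndex API; a real router uses an LLM-based selector rather than the trivial keyword rule shown.

```python
class SummaryEngine:
    """Stand-in for a summarization query engine (illustrative only)."""
    def query(self, text: str) -> str:
        return f"[summary answer] {text}"

class VectorEngine:
    """Stand-in for a top-k semantic search engine (illustrative only)."""
    def query(self, text: str) -> str:
        return f"[semantic-search answer] {text}"

def keyword_selector(query: str) -> str:
    # A real router asks an LLM to choose the best engine; here we use
    # a trivial keyword rule purely to demonstrate the control flow.
    return "summary" if "summarize" in query.lower() else "vector"

class SimpleRouter:
    def __init__(self, engines: dict):
        self.engines = engines  # mapping of engine name -> engine

    def query(self, text: str) -> str:
        choice = keyword_selector(text)        # 1. select an engine
        return self.engines[choice].query(text)  # 2. delegate the query

router = SimpleRouter({"summary": SummaryEngine(), "vector": VectorEngine()})
print(router.query("Summarize the pipeline security blog"))
print(router.query("What guardrails does the pipeline use?"))
```

The point of the single-interface design is that the caller only ever talks to the router; which index answers a given question is decided per query.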
How Indexes Work
Before jumping into the implementation details, let's first examine how indexes work. We will implement two types of indexes in our chatbot: a list index and a vector store index.
List Index
The list index stores nodes as a sequential chain. The document texts are chunked up, converted to nodes, and stored in a list during index construction. During queries, LlamaIndex loads all nodes in the list into the Response Synthesis module. The list index is best suited for summarization.

The list index offers many ways of querying, from an embedding-based query that…