Exploring ReAct Agent for Better Prompting in RAG Pipeline
Analyzing Amazon’s recent disclosures and attitudes towards LLMs with LlamaIndex’s ReAct Agent and Cybersyn data from the Snowflake Marketplace
9 min read · Sep 22, 2023

There are many prompting techniques. To name a few:
- Zero-shot prompting: The model is given no examples of the task it is supposed to perform. The prompt describes the task, and the model is expected to figure out how to complete it from that description alone.
- Few-shot prompting: In contrast to zero-shot prompting, the prompt includes a few worked examples of the task. This helps the model infer the expected format and behavior from context.
- Chain-of-thought (CoT) prompting: The prompt asks the model to produce intermediate reasoning steps before giving its final answer. Decomposing a complex task into smaller steps makes the reasoning explicit and tends to improve accuracy on multi-step problems.
- Tree of thoughts (ToT) prompting: This technique generalizes chain-of-thought by letting the model explore multiple reasoning branches arranged as a tree. Intermediate "thoughts" can be evaluated, and unpromising branches abandoned, which helps on tasks that require search or planning.
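The first three techniques above can be illustrated with plain prompt strings. The sketch below uses a hypothetical sentiment-classification task (the task, reviews, and labels are illustrative, not from this article); each string is what would be sent to an LLM completion endpoint.

```python
# Illustrative prompt templates for zero-shot, few-shot, and chain-of-thought
# prompting. No model is called; the strings are the prompts themselves.
# The classification task and example reviews are hypothetical.

TASK = "Classify the sentiment of the review as positive or negative."
QUERY = 'Review: "The battery died after two days."\nSentiment:'

# Zero-shot: the task description alone, no worked examples.
zero_shot = f"{TASK}\n{QUERY}"

# Few-shot: the same task, preceded by a handful of worked examples
# so the model can infer the expected format and behavior.
examples = [
    ("Great screen, fast shipping.", "positive"),
    ("Stopped working within a week.", "negative"),
]
demos = "\n".join(
    f'Review: "{text}"\nSentiment: {label}' for text, label in examples
)
few_shot = f"{TASK}\n{demos}\n{QUERY}"

# Chain-of-thought: explicitly ask for intermediate reasoning steps
# before the final answer.
chain_of_thought = (
    f"{TASK}\n"
    'Review: "The battery died after two days."\n'
    "Let's think step by step, then state the final sentiment."
)

print(zero_shot)
print("---")
print(few_shot)
```

In practice these templates differ only in what surrounds the query: nothing (zero-shot), worked examples (few-shot), or an instruction to reason aloud (chain-of-thought).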